./data/MUCAC/CelebAMask-HQ/CelebA-HQ-img

📌 Retain class distribution for seed 9:
Class 0: 5284
Class 1: 4210

📌 Forget class distribution for seed 9:
Class 0: 527
Class 1: 527
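The class counts above (5284 + 527 and 4210 + 527, i.e. 9494 retain / 1054 forget samples) are consistent with a seeded, class-balanced forget split. A minimal sketch of how such a split could be drawn, assuming a flat label list; the function name, `per_class` parameter, and label layout are illustrative, not the actual split code of this run:

```python
import random

def split_retain_forget(labels, per_class=527, seed=9):
    """Draw a class-balanced forget set (per_class samples per class);
    everything else becomes the retain set. Hypothetical sketch."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    forget = []
    for idxs in by_class.values():
        rng.shuffle(idxs)              # seeded shuffle within each class
        forget.extend(idxs[:per_class])
    forget_set = set(forget)
    retain = [i for i in range(len(labels)) if i not in forget_set]
    return retain, forget

# Class sizes reconstructed from the log: 5284+527 of class 0, 4210+527 of class 1
labels = [0] * 5811 + [1] * 4737
retain, forget = split_retain_forget(labels)
print(len(retain), len(forget))        # 9494 1054, matching the training lines below
```

The 9494 retain samples match the `[.../9494]` denominator in every training line of this log.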
./data/MUCAC/CelebAMask-HQ/CelebA-HQ-img
⚠️ Warning: Retain train loader may not be shuffled.
Training Epoch: 1 [256/9494]	Loss: 0.6954	LR: 0.000000
Training Epoch: 1 [512/9494]	Loss: 0.6978	LR: 0.002632
Training Epoch: 1 [768/9494]	Loss: 0.6826	LR: 0.005263
Training Epoch: 1 [1024/9494]	Loss: 0.7044	LR: 0.007895
Training Epoch: 1 [1280/9494]	Loss: 0.7227	LR: 0.010526
Training Epoch: 1 [1536/9494]	Loss: 0.6796	LR: 0.013158
Training Epoch: 1 [1792/9494]	Loss: 0.8042	LR: 0.015789
Training Epoch: 1 [2048/9494]	Loss: 0.7677	LR: 0.018421
Training Epoch: 1 [2304/9494]	Loss: 0.6992	LR: 0.021053
Training Epoch: 1 [2560/9494]	Loss: 0.7320	LR: 0.023684
Training Epoch: 1 [2816/9494]	Loss: 0.9449	LR: 0.026316
Training Epoch: 1 [3072/9494]	Loss: 1.1995	LR: 0.028947
Training Epoch: 1 [3328/9494]	Loss: 0.7521	LR: 0.031579
Training Epoch: 1 [3584/9494]	Loss: 0.7891	LR: 0.034211
Training Epoch: 1 [3840/9494]	Loss: 0.8189	LR: 0.036842
Training Epoch: 1 [4096/9494]	Loss: 1.3535	LR: 0.039474
Training Epoch: 1 [4352/9494]	Loss: 0.9434	LR: 0.042105
Training Epoch: 1 [4608/9494]	Loss: 0.6956	LR: 0.044737
Training Epoch: 1 [4864/9494]	Loss: 0.8740	LR: 0.047368
Training Epoch: 1 [5120/9494]	Loss: 1.6520	LR: 0.050000
Training Epoch: 1 [5376/9494]	Loss: 0.9865	LR: 0.052632
Training Epoch: 1 [5632/9494]	Loss: 0.7078	LR: 0.055263
Training Epoch: 1 [5888/9494]	Loss: 0.7654	LR: 0.057895
Training Epoch: 1 [6144/9494]	Loss: 0.6960	LR: 0.060526
Training Epoch: 1 [6400/9494]	Loss: 0.8852	LR: 0.063158
Training Epoch: 1 [6656/9494]	Loss: 0.7249	LR: 0.065789
Training Epoch: 1 [6912/9494]	Loss: 0.7051	LR: 0.068421
Training Epoch: 1 [7168/9494]	Loss: 0.7276	LR: 0.071053
Training Epoch: 1 [7424/9494]	Loss: 0.7474	LR: 0.073684
Training Epoch: 1 [7680/9494]	Loss: 0.6841	LR: 0.076316
Training Epoch: 1 [7936/9494]	Loss: 0.8506	LR: 0.078947
Training Epoch: 1 [8192/9494]	Loss: 0.9056	LR: 0.081579
Training Epoch: 1 [8448/9494]	Loss: 0.8439	LR: 0.084211
Training Epoch: 1 [8704/9494]	Loss: 0.7223	LR: 0.086842
Training Epoch: 1 [8960/9494]	Loss: 0.7731	LR: 0.089474
Training Epoch: 1 [9216/9494]	Loss: 0.9621	LR: 0.092105
Training Epoch: 1 [9472/9494]	Loss: 0.7611	LR: 0.094737
Training Epoch: 1 [9494/9494]	Loss: 1.4086	LR: 0.097368
Epoch 1 - Average Train Loss: 0.8299, Train Accuracy: 0.5176
Epoch 1 training time consumed: 333.25s
Evaluating Network.....
Test set: Epoch: 1, Average loss: 0.4446, Accuracy: 0.5550, Time consumed:8.14s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-1-best.pth
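The LR column in epoch 1 rises linearly from 0 in steps of 0.002632 (= 0.1/38, with 38 batches of 256 covering 9494 samples), sits at 0.100000 for epochs 2-9, then drops to 0.020000 at epoch 10. This trace is consistent with per-iteration linear warmup followed by a MultiStepLR-style decay (gamma 0.2, milestone at epoch 10). A sketch reproducing that schedule, assuming 1-based epoch/iteration indexing; the constants and function name are inferred from the log, not taken from the training script:

```python
BASE_LR = 0.1
WARMUP_ITERS = 38          # ceil(9494 / 256) batches in epoch 1
MILESTONES = (10,)         # epochs at which the LR is multiplied by GAMMA
GAMMA = 0.2

def learning_rate(epoch: int, it: int) -> float:
    """LR for a given 1-based epoch and 1-based iteration within it."""
    if epoch == 1:         # linear warmup from 0 toward BASE_LR
        return BASE_LR * (it - 1) / WARMUP_ITERS
    lr = BASE_LR
    for m in MILESTONES:   # step decay after warmup epochs
        if epoch >= m:
            lr *= GAMMA
    return lr

print(f"{learning_rate(1, 2):.6f}")    # 0.002632, the log's second epoch-1 line
print(f"{learning_rate(2, 1):.6f}")    # 0.100000
print(f"{learning_rate(10, 1):.6f}")   # 0.020000
```

Note the warmup never quite reaches 0.1 within epoch 1 (last logged value 0.097368 = 0.1 × 37/38), which matches the formula above.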
Training Epoch: 2 [256/9494]	Loss: 0.7968	LR: 0.100000
Training Epoch: 2 [512/9494]	Loss: 1.1658	LR: 0.100000
Training Epoch: 2 [768/9494]	Loss: 0.7139	LR: 0.100000
Training Epoch: 2 [1024/9494]	Loss: 0.7858	LR: 0.100000
Training Epoch: 2 [1280/9494]	Loss: 0.7640	LR: 0.100000
Training Epoch: 2 [1536/9494]	Loss: 0.7061	LR: 0.100000
Training Epoch: 2 [1792/9494]	Loss: 0.7684	LR: 0.100000
Training Epoch: 2 [2048/9494]	Loss: 0.6908	LR: 0.100000
Training Epoch: 2 [2304/9494]	Loss: 0.7552	LR: 0.100000
Training Epoch: 2 [2560/9494]	Loss: 0.6823	LR: 0.100000
Training Epoch: 2 [2816/9494]	Loss: 0.7037	LR: 0.100000
Training Epoch: 2 [3072/9494]	Loss: 0.6980	LR: 0.100000
Training Epoch: 2 [3328/9494]	Loss: 0.7194	LR: 0.100000
Training Epoch: 2 [3584/9494]	Loss: 0.6858	LR: 0.100000
Training Epoch: 2 [3840/9494]	Loss: 0.6972	LR: 0.100000
Training Epoch: 2 [4096/9494]	Loss: 0.7286	LR: 0.100000
Training Epoch: 2 [4352/9494]	Loss: 0.7479	LR: 0.100000
Training Epoch: 2 [4608/9494]	Loss: 0.6440	LR: 0.100000
Training Epoch: 2 [4864/9494]	Loss: 0.6910	LR: 0.100000
Training Epoch: 2 [5120/9494]	Loss: 0.7046	LR: 0.100000
Training Epoch: 2 [5376/9494]	Loss: 0.6998	LR: 0.100000
Training Epoch: 2 [5632/9494]	Loss: 0.6808	LR: 0.100000
Training Epoch: 2 [5888/9494]	Loss: 0.6880	LR: 0.100000
Training Epoch: 2 [6144/9494]	Loss: 0.7015	LR: 0.100000
Training Epoch: 2 [6400/9494]	Loss: 0.6886	LR: 0.100000
Training Epoch: 2 [6656/9494]	Loss: 0.6756	LR: 0.100000
Training Epoch: 2 [6912/9494]	Loss: 0.7005	LR: 0.100000
Training Epoch: 2 [7168/9494]	Loss: 0.7198	LR: 0.100000
Training Epoch: 2 [7424/9494]	Loss: 0.6775	LR: 0.100000
Training Epoch: 2 [7680/9494]	Loss: 0.6611	LR: 0.100000
Training Epoch: 2 [7936/9494]	Loss: 0.6860	LR: 0.100000
Training Epoch: 2 [8192/9494]	Loss: 0.6572	LR: 0.100000
Training Epoch: 2 [8448/9494]	Loss: 0.7735	LR: 0.100000
Training Epoch: 2 [8704/9494]	Loss: 0.6742	LR: 0.100000
Training Epoch: 2 [8960/9494]	Loss: 0.6840	LR: 0.100000
Training Epoch: 2 [9216/9494]	Loss: 0.6886	LR: 0.100000
Training Epoch: 2 [9472/9494]	Loss: 0.6850	LR: 0.100000
Training Epoch: 2 [9494/9494]	Loss: 0.7072	LR: 0.100000
Epoch 2 - Average Train Loss: 0.7186, Train Accuracy: 0.5499
Epoch 2 training time consumed: 137.93s
Evaluating Network.....
Test set: Epoch: 2, Average loss: 0.0044, Accuracy: 0.5235, Time consumed:8.12s
Training Epoch: 3 [256/9494]	Loss: 0.6924	LR: 0.100000
Training Epoch: 3 [512/9494]	Loss: 0.6719	LR: 0.100000
Training Epoch: 3 [768/9494]	Loss: 0.7664	LR: 0.100000
Training Epoch: 3 [1024/9494]	Loss: 0.6767	LR: 0.100000
Training Epoch: 3 [1280/9494]	Loss: 0.7096	LR: 0.100000
Training Epoch: 3 [1536/9494]	Loss: 0.7112	LR: 0.100000
Training Epoch: 3 [1792/9494]	Loss: 0.6798	LR: 0.100000
Training Epoch: 3 [2048/9494]	Loss: 0.6632	LR: 0.100000
Training Epoch: 3 [2304/9494]	Loss: 0.7231	LR: 0.100000
Training Epoch: 3 [2560/9494]	Loss: 0.7062	LR: 0.100000
Training Epoch: 3 [2816/9494]	Loss: 0.6728	LR: 0.100000
Training Epoch: 3 [3072/9494]	Loss: 0.6848	LR: 0.100000
Training Epoch: 3 [3328/9494]	Loss: 0.6539	LR: 0.100000
Training Epoch: 3 [3584/9494]	Loss: 0.6964	LR: 0.100000
Training Epoch: 3 [3840/9494]	Loss: 0.7798	LR: 0.100000
Training Epoch: 3 [4096/9494]	Loss: 0.7114	LR: 0.100000
Training Epoch: 3 [4352/9494]	Loss: 0.7053	LR: 0.100000
Training Epoch: 3 [4608/9494]	Loss: 0.6859	LR: 0.100000
Training Epoch: 3 [4864/9494]	Loss: 0.6716	LR: 0.100000
Training Epoch: 3 [5120/9494]	Loss: 0.7038	LR: 0.100000
Training Epoch: 3 [5376/9494]	Loss: 0.6726	LR: 0.100000
Training Epoch: 3 [5632/9494]	Loss: 0.7085	LR: 0.100000
Training Epoch: 3 [5888/9494]	Loss: 0.7646	LR: 0.100000
Training Epoch: 3 [6144/9494]	Loss: 0.7084	LR: 0.100000
Training Epoch: 3 [6400/9494]	Loss: 0.6774	LR: 0.100000
Training Epoch: 3 [6656/9494]	Loss: 0.6959	LR: 0.100000
Training Epoch: 3 [6912/9494]	Loss: 0.7170	LR: 0.100000
Training Epoch: 3 [7168/9494]	Loss: 0.6949	LR: 0.100000
Training Epoch: 3 [7424/9494]	Loss: 0.7264	LR: 0.100000
Training Epoch: 3 [7680/9494]	Loss: 0.6640	LR: 0.100000
Training Epoch: 3 [7936/9494]	Loss: 0.6431	LR: 0.100000
Training Epoch: 3 [8192/9494]	Loss: 0.7055	LR: 0.100000
Training Epoch: 3 [8448/9494]	Loss: 0.6699	LR: 0.100000
Training Epoch: 3 [8704/9494]	Loss: 0.6643	LR: 0.100000
Training Epoch: 3 [8960/9494]	Loss: 0.6692	LR: 0.100000
Training Epoch: 3 [9216/9494]	Loss: 0.6608	LR: 0.100000
Training Epoch: 3 [9472/9494]	Loss: 0.6678	LR: 0.100000
Training Epoch: 3 [9494/9494]	Loss: 0.7346	LR: 0.100000
Epoch 3 - Average Train Loss: 0.6941, Train Accuracy: 0.5818
Epoch 3 training time consumed: 137.86s
Evaluating Network.....
Test set: Epoch: 3, Average loss: 0.0033, Accuracy: 0.5521, Time consumed:8.05s
Training Epoch: 4 [256/9494]	Loss: 0.7390	LR: 0.100000
Training Epoch: 4 [512/9494]	Loss: 0.7770	LR: 0.100000
Training Epoch: 4 [768/9494]	Loss: 0.7326	LR: 0.100000
Training Epoch: 4 [1024/9494]	Loss: 0.7528	LR: 0.100000
Training Epoch: 4 [1280/9494]	Loss: 0.7038	LR: 0.100000
Training Epoch: 4 [1536/9494]	Loss: 0.7219	LR: 0.100000
Training Epoch: 4 [1792/9494]	Loss: 0.6987	LR: 0.100000
Training Epoch: 4 [2048/9494]	Loss: 0.7239	LR: 0.100000
Training Epoch: 4 [2304/9494]	Loss: 0.7336	LR: 0.100000
Training Epoch: 4 [2560/9494]	Loss: 0.7046	LR: 0.100000
Training Epoch: 4 [2816/9494]	Loss: 0.6960	LR: 0.100000
Training Epoch: 4 [3072/9494]	Loss: 0.7145	LR: 0.100000
Training Epoch: 4 [3328/9494]	Loss: 0.7056	LR: 0.100000
Training Epoch: 4 [3584/9494]	Loss: 0.6910	LR: 0.100000
Training Epoch: 4 [3840/9494]	Loss: 0.6921	LR: 0.100000
Training Epoch: 4 [4096/9494]	Loss: 0.6968	LR: 0.100000
Training Epoch: 4 [4352/9494]	Loss: 0.6931	LR: 0.100000
Training Epoch: 4 [4608/9494]	Loss: 0.7156	LR: 0.100000
Training Epoch: 4 [4864/9494]	Loss: 0.7081	LR: 0.100000
Training Epoch: 4 [5120/9494]	Loss: 0.6935	LR: 0.100000
Training Epoch: 4 [5376/9494]	Loss: 0.6707	LR: 0.100000
Training Epoch: 4 [5632/9494]	Loss: 0.6831	LR: 0.100000
Training Epoch: 4 [5888/9494]	Loss: 0.6748	LR: 0.100000
Training Epoch: 4 [6144/9494]	Loss: 0.6708	LR: 0.100000
Training Epoch: 4 [6400/9494]	Loss: 0.6930	LR: 0.100000
Training Epoch: 4 [6656/9494]	Loss: 0.6620	LR: 0.100000
Training Epoch: 4 [6912/9494]	Loss: 0.6772	LR: 0.100000
Training Epoch: 4 [7168/9494]	Loss: 0.6739	LR: 0.100000
Training Epoch: 4 [7424/9494]	Loss: 0.6806	LR: 0.100000
Training Epoch: 4 [7680/9494]	Loss: 0.6926	LR: 0.100000
Training Epoch: 4 [7936/9494]	Loss: 0.6692	LR: 0.100000
Training Epoch: 4 [8192/9494]	Loss: 0.6689	LR: 0.100000
Training Epoch: 4 [8448/9494]	Loss: 0.6672	LR: 0.100000
Training Epoch: 4 [8704/9494]	Loss: 0.6730	LR: 0.100000
Training Epoch: 4 [8960/9494]	Loss: 0.6870	LR: 0.100000
Training Epoch: 4 [9216/9494]	Loss: 0.7119	LR: 0.100000
Training Epoch: 4 [9472/9494]	Loss: 0.6917	LR: 0.100000
Training Epoch: 4 [9494/9494]	Loss: 0.7407	LR: 0.100000
Epoch 4 - Average Train Loss: 0.6985, Train Accuracy: 0.5319
Epoch 4 training time consumed: 137.86s
Evaluating Network.....
Test set: Epoch: 4, Average loss: 0.0029, Accuracy: 0.5801, Time consumed:8.29s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-4-best.pth
Training Epoch: 5 [256/9494]	Loss: 0.6784	LR: 0.100000
Training Epoch: 5 [512/9494]	Loss: 0.6995	LR: 0.100000
Training Epoch: 5 [768/9494]	Loss: 0.6765	LR: 0.100000
Training Epoch: 5 [1024/9494]	Loss: 0.6695	LR: 0.100000
Training Epoch: 5 [1280/9494]	Loss: 0.6968	LR: 0.100000
Training Epoch: 5 [1536/9494]	Loss: 0.6807	LR: 0.100000
Training Epoch: 5 [1792/9494]	Loss: 0.6839	LR: 0.100000
Training Epoch: 5 [2048/9494]	Loss: 0.6991	LR: 0.100000
Training Epoch: 5 [2304/9494]	Loss: 0.6857	LR: 0.100000
Training Epoch: 5 [2560/9494]	Loss: 0.6921	LR: 0.100000
Training Epoch: 5 [2816/9494]	Loss: 0.6954	LR: 0.100000
Training Epoch: 5 [3072/9494]	Loss: 0.6834	LR: 0.100000
Training Epoch: 5 [3328/9494]	Loss: 0.6778	LR: 0.100000
Training Epoch: 5 [3584/9494]	Loss: 0.6764	LR: 0.100000
Training Epoch: 5 [3840/9494]	Loss: 0.6878	LR: 0.100000
Training Epoch: 5 [4096/9494]	Loss: 0.6839	LR: 0.100000
Training Epoch: 5 [4352/9494]	Loss: 0.6772	LR: 0.100000
Training Epoch: 5 [4608/9494]	Loss: 0.6844	LR: 0.100000
Training Epoch: 5 [4864/9494]	Loss: 0.6691	LR: 0.100000
Training Epoch: 5 [5120/9494]	Loss: 0.6681	LR: 0.100000
Training Epoch: 5 [5376/9494]	Loss: 0.6584	LR: 0.100000
Training Epoch: 5 [5632/9494]	Loss: 0.6449	LR: 0.100000
Training Epoch: 5 [5888/9494]	Loss: 0.6748	LR: 0.100000
Training Epoch: 5 [6144/9494]	Loss: 0.6921	LR: 0.100000
Training Epoch: 5 [6400/9494]	Loss: 0.6617	LR: 0.100000
Training Epoch: 5 [6656/9494]	Loss: 0.6668	LR: 0.100000
Training Epoch: 5 [6912/9494]	Loss: 0.6597	LR: 0.100000
Training Epoch: 5 [7168/9494]	Loss: 0.6475	LR: 0.100000
Training Epoch: 5 [7424/9494]	Loss: 0.6680	LR: 0.100000
Training Epoch: 5 [7680/9494]	Loss: 0.6720	LR: 0.100000
Training Epoch: 5 [7936/9494]	Loss: 0.6570	LR: 0.100000
Training Epoch: 5 [8192/9494]	Loss: 0.7020	LR: 0.100000
Training Epoch: 5 [8448/9494]	Loss: 0.6791	LR: 0.100000
Training Epoch: 5 [8704/9494]	Loss: 0.6790	LR: 0.100000
Training Epoch: 5 [8960/9494]	Loss: 0.6794	LR: 0.100000
Training Epoch: 5 [9216/9494]	Loss: 0.7009	LR: 0.100000
Training Epoch: 5 [9472/9494]	Loss: 0.6680	LR: 0.100000
Training Epoch: 5 [9494/9494]	Loss: 0.8292	LR: 0.100000
Epoch 5 - Average Train Loss: 0.6781, Train Accuracy: 0.5695
Epoch 5 training time consumed: 137.50s
Evaluating Network.....
Test set: Epoch: 5, Average loss: 0.0031, Accuracy: 0.5671, Time consumed:8.31s
Training Epoch: 6 [256/9494]	Loss: 0.6721	LR: 0.100000
Training Epoch: 6 [512/9494]	Loss: 0.6593	LR: 0.100000
Training Epoch: 6 [768/9494]	Loss: 0.6677	LR: 0.100000
Training Epoch: 6 [1024/9494]	Loss: 0.7959	LR: 0.100000
Training Epoch: 6 [1280/9494]	Loss: 0.6906	LR: 0.100000
Training Epoch: 6 [1536/9494]	Loss: 0.6851	LR: 0.100000
Training Epoch: 6 [1792/9494]	Loss: 0.6846	LR: 0.100000
Training Epoch: 6 [2048/9494]	Loss: 0.7005	LR: 0.100000
Training Epoch: 6 [2304/9494]	Loss: 0.6850	LR: 0.100000
Training Epoch: 6 [2560/9494]	Loss: 0.6975	LR: 0.100000
Training Epoch: 6 [2816/9494]	Loss: 0.6905	LR: 0.100000
Training Epoch: 6 [3072/9494]	Loss: 0.6905	LR: 0.100000
Training Epoch: 6 [3328/9494]	Loss: 0.6835	LR: 0.100000
Training Epoch: 6 [3584/9494]	Loss: 0.6936	LR: 0.100000
Training Epoch: 6 [3840/9494]	Loss: 0.6923	LR: 0.100000
Training Epoch: 6 [4096/9494]	Loss: 0.6779	LR: 0.100000
Training Epoch: 6 [4352/9494]	Loss: 0.6810	LR: 0.100000
Training Epoch: 6 [4608/9494]	Loss: 0.6747	LR: 0.100000
Training Epoch: 6 [4864/9494]	Loss: 0.6801	LR: 0.100000
Training Epoch: 6 [5120/9494]	Loss: 0.6874	LR: 0.100000
Training Epoch: 6 [5376/9494]	Loss: 0.6726	LR: 0.100000
Training Epoch: 6 [5632/9494]	Loss: 0.6848	LR: 0.100000
Training Epoch: 6 [5888/9494]	Loss: 0.6850	LR: 0.100000
Training Epoch: 6 [6144/9494]	Loss: 0.6727	LR: 0.100000
Training Epoch: 6 [6400/9494]	Loss: 0.6739	LR: 0.100000
Training Epoch: 6 [6656/9494]	Loss: 0.6451	LR: 0.100000
Training Epoch: 6 [6912/9494]	Loss: 0.6473	LR: 0.100000
Training Epoch: 6 [7168/9494]	Loss: 0.6855	LR: 0.100000
Training Epoch: 6 [7424/9494]	Loss: 0.6496	LR: 0.100000
Training Epoch: 6 [7680/9494]	Loss: 0.6947	LR: 0.100000
Training Epoch: 6 [7936/9494]	Loss: 0.6605	LR: 0.100000
Training Epoch: 6 [8192/9494]	Loss: 0.6887	LR: 0.100000
Training Epoch: 6 [8448/9494]	Loss: 0.6777	LR: 0.100000
Training Epoch: 6 [8704/9494]	Loss: 0.6885	LR: 0.100000
Training Epoch: 6 [8960/9494]	Loss: 0.6689	LR: 0.100000
Training Epoch: 6 [9216/9494]	Loss: 0.6825	LR: 0.100000
Training Epoch: 6 [9472/9494]	Loss: 0.6805	LR: 0.100000
Training Epoch: 6 [9494/9494]	Loss: 0.6613	LR: 0.100000
Epoch 6 - Average Train Loss: 0.6823, Train Accuracy: 0.5641
Epoch 6 training time consumed: 137.90s
Evaluating Network.....
Test set: Epoch: 6, Average loss: 0.0033, Accuracy: 0.5545, Time consumed:8.25s
Training Epoch: 7 [256/9494]	Loss: 0.6587	LR: 0.100000
Training Epoch: 7 [512/9494]	Loss: 0.6871	LR: 0.100000
Training Epoch: 7 [768/9494]	Loss: 0.6760	LR: 0.100000
Training Epoch: 7 [1024/9494]	Loss: 0.7048	LR: 0.100000
Training Epoch: 7 [1280/9494]	Loss: 0.7053	LR: 0.100000
Training Epoch: 7 [1536/9494]	Loss: 0.6841	LR: 0.100000
Training Epoch: 7 [1792/9494]	Loss: 0.6732	LR: 0.100000
Training Epoch: 7 [2048/9494]	Loss: 0.6762	LR: 0.100000
Training Epoch: 7 [2304/9494]	Loss: 0.6649	LR: 0.100000
Training Epoch: 7 [2560/9494]	Loss: 0.6734	LR: 0.100000
Training Epoch: 7 [2816/9494]	Loss: 0.6842	LR: 0.100000
Training Epoch: 7 [3072/9494]	Loss: 0.6821	LR: 0.100000
Training Epoch: 7 [3328/9494]	Loss: 0.6590	LR: 0.100000
Training Epoch: 7 [3584/9494]	Loss: 0.6604	LR: 0.100000
Training Epoch: 7 [3840/9494]	Loss: 0.6777	LR: 0.100000
Training Epoch: 7 [4096/9494]	Loss: 0.6747	LR: 0.100000
Training Epoch: 7 [4352/9494]	Loss: 0.6730	LR: 0.100000
Training Epoch: 7 [4608/9494]	Loss: 0.7084	LR: 0.100000
Training Epoch: 7 [4864/9494]	Loss: 0.6722	LR: 0.100000
Training Epoch: 7 [5120/9494]	Loss: 0.6577	LR: 0.100000
Training Epoch: 7 [5376/9494]	Loss: 0.6659	LR: 0.100000
Training Epoch: 7 [5632/9494]	Loss: 0.6773	LR: 0.100000
Training Epoch: 7 [5888/9494]	Loss: 0.6721	LR: 0.100000
Training Epoch: 7 [6144/9494]	Loss: 0.6777	LR: 0.100000
Training Epoch: 7 [6400/9494]	Loss: 0.6827	LR: 0.100000
Training Epoch: 7 [6656/9494]	Loss: 0.6692	LR: 0.100000
Training Epoch: 7 [6912/9494]	Loss: 0.6651	LR: 0.100000
Training Epoch: 7 [7168/9494]	Loss: 0.6728	LR: 0.100000
Training Epoch: 7 [7424/9494]	Loss: 0.6736	LR: 0.100000
Training Epoch: 7 [7680/9494]	Loss: 0.6708	LR: 0.100000
Training Epoch: 7 [7936/9494]	Loss: 0.6695	LR: 0.100000
Training Epoch: 7 [8192/9494]	Loss: 0.7054	LR: 0.100000
Training Epoch: 7 [8448/9494]	Loss: 0.6586	LR: 0.100000
Training Epoch: 7 [8704/9494]	Loss: 0.6891	LR: 0.100000
Training Epoch: 7 [8960/9494]	Loss: 0.6670	LR: 0.100000
Training Epoch: 7 [9216/9494]	Loss: 0.6642	LR: 0.100000
Training Epoch: 7 [9472/9494]	Loss: 0.6708	LR: 0.100000
Training Epoch: 7 [9494/9494]	Loss: 0.6819	LR: 0.100000
Epoch 7 - Average Train Loss: 0.6758, Train Accuracy: 0.5727
Epoch 7 training time consumed: 137.61s
Evaluating Network.....
Test set: Epoch: 7, Average loss: 0.0029, Accuracy: 0.6126, Time consumed:8.11s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-7-best.pth
Training Epoch: 8 [256/9494]	Loss: 0.6656	LR: 0.100000
Training Epoch: 8 [512/9494]	Loss: 0.6736	LR: 0.100000
Training Epoch: 8 [768/9494]	Loss: 0.6716	LR: 0.100000
Training Epoch: 8 [1024/9494]	Loss: 0.6829	LR: 0.100000
Training Epoch: 8 [1280/9494]	Loss: 0.6562	LR: 0.100000
Training Epoch: 8 [1536/9494]	Loss: 0.6680	LR: 0.100000
Training Epoch: 8 [1792/9494]	Loss: 0.6606	LR: 0.100000
Training Epoch: 8 [2048/9494]	Loss: 0.6890	LR: 0.100000
Training Epoch: 8 [2304/9494]	Loss: 0.6542	LR: 0.100000
Training Epoch: 8 [2560/9494]	Loss: 0.6654	LR: 0.100000
Training Epoch: 8 [2816/9494]	Loss: 0.6597	LR: 0.100000
Training Epoch: 8 [3072/9494]	Loss: 0.6668	LR: 0.100000
Training Epoch: 8 [3328/9494]	Loss: 0.6431	LR: 0.100000
Training Epoch: 8 [3584/9494]	Loss: 0.6770	LR: 0.100000
Training Epoch: 8 [3840/9494]	Loss: 0.6701	LR: 0.100000
Training Epoch: 8 [4096/9494]	Loss: 0.6595	LR: 0.100000
Training Epoch: 8 [4352/9494]	Loss: 0.6876	LR: 0.100000
Training Epoch: 8 [4608/9494]	Loss: 0.6757	LR: 0.100000
Training Epoch: 8 [4864/9494]	Loss: 0.6710	LR: 0.100000
Training Epoch: 8 [5120/9494]	Loss: 0.6436	LR: 0.100000
Training Epoch: 8 [5376/9494]	Loss: 0.6643	LR: 0.100000
Training Epoch: 8 [5632/9494]	Loss: 0.6684	LR: 0.100000
Training Epoch: 8 [5888/9494]	Loss: 0.6422	LR: 0.100000
Training Epoch: 8 [6144/9494]	Loss: 0.6766	LR: 0.100000
Training Epoch: 8 [6400/9494]	Loss: 0.6644	LR: 0.100000
Training Epoch: 8 [6656/9494]	Loss: 0.6668	LR: 0.100000
Training Epoch: 8 [6912/9494]	Loss: 0.6701	LR: 0.100000
Training Epoch: 8 [7168/9494]	Loss: 0.6963	LR: 0.100000
Training Epoch: 8 [7424/9494]	Loss: 0.6733	LR: 0.100000
Training Epoch: 8 [7680/9494]	Loss: 0.6480	LR: 0.100000
Training Epoch: 8 [7936/9494]	Loss: 0.6640	LR: 0.100000
Training Epoch: 8 [8192/9494]	Loss: 0.6559	LR: 0.100000
Training Epoch: 8 [8448/9494]	Loss: 0.6697	LR: 0.100000
Training Epoch: 8 [8704/9494]	Loss: 0.6473	LR: 0.100000
Training Epoch: 8 [8960/9494]	Loss: 0.6714	LR: 0.100000
Training Epoch: 8 [9216/9494]	Loss: 0.6663	LR: 0.100000
Training Epoch: 8 [9472/9494]	Loss: 0.6939	LR: 0.100000
Training Epoch: 8 [9494/9494]	Loss: 0.7905	LR: 0.100000
Epoch 8 - Average Train Loss: 0.6673, Train Accuracy: 0.5979
Epoch 8 training time consumed: 138.25s
Evaluating Network.....
Test set: Epoch: 8, Average loss: 0.0029, Accuracy: 0.5797, Time consumed:8.27s
Training Epoch: 9 [256/9494]	Loss: 0.6863	LR: 0.100000
Training Epoch: 9 [512/9494]	Loss: 0.6650	LR: 0.100000
Training Epoch: 9 [768/9494]	Loss: 0.6922	LR: 0.100000
Training Epoch: 9 [1024/9494]	Loss: 0.7465	LR: 0.100000
Training Epoch: 9 [1280/9494]	Loss: 0.6795	LR: 0.100000
Training Epoch: 9 [1536/9494]	Loss: 0.7025	LR: 0.100000
Training Epoch: 9 [1792/9494]	Loss: 0.6811	LR: 0.100000
Training Epoch: 9 [2048/9494]	Loss: 0.6724	LR: 0.100000
Training Epoch: 9 [2304/9494]	Loss: 0.6845	LR: 0.100000
Training Epoch: 9 [2560/9494]	Loss: 0.6874	LR: 0.100000
Training Epoch: 9 [2816/9494]	Loss: 0.6901	LR: 0.100000
Training Epoch: 9 [3072/9494]	Loss: 0.6732	LR: 0.100000
Training Epoch: 9 [3328/9494]	Loss: 0.6681	LR: 0.100000
Training Epoch: 9 [3584/9494]	Loss: 0.6630	LR: 0.100000
Training Epoch: 9 [3840/9494]	Loss: 0.6567	LR: 0.100000
Training Epoch: 9 [4096/9494]	Loss: 0.6844	LR: 0.100000
Training Epoch: 9 [4352/9494]	Loss: 0.6889	LR: 0.100000
Training Epoch: 9 [4608/9494]	Loss: 0.6843	LR: 0.100000
Training Epoch: 9 [4864/9494]	Loss: 0.6927	LR: 0.100000
Training Epoch: 9 [5120/9494]	Loss: 0.6697	LR: 0.100000
Training Epoch: 9 [5376/9494]	Loss: 0.6625	LR: 0.100000
Training Epoch: 9 [5632/9494]	Loss: 0.6762	LR: 0.100000
Training Epoch: 9 [5888/9494]	Loss: 0.6666	LR: 0.100000
Training Epoch: 9 [6144/9494]	Loss: 0.6494	LR: 0.100000
Training Epoch: 9 [6400/9494]	Loss: 0.6808	LR: 0.100000
Training Epoch: 9 [6656/9494]	Loss: 0.6682	LR: 0.100000
Training Epoch: 9 [6912/9494]	Loss: 0.6785	LR: 0.100000
Training Epoch: 9 [7168/9494]	Loss: 0.6863	LR: 0.100000
Training Epoch: 9 [7424/9494]	Loss: 0.6595	LR: 0.100000
Training Epoch: 9 [7680/9494]	Loss: 0.6800	LR: 0.100000
Training Epoch: 9 [7936/9494]	Loss: 0.6552	LR: 0.100000
Training Epoch: 9 [8192/9494]	Loss: 0.6521	LR: 0.100000
Training Epoch: 9 [8448/9494]	Loss: 0.6747	LR: 0.100000
Training Epoch: 9 [8704/9494]	Loss: 0.6494	LR: 0.100000
Training Epoch: 9 [8960/9494]	Loss: 0.6617	LR: 0.100000
Training Epoch: 9 [9216/9494]	Loss: 0.6468	LR: 0.100000
Training Epoch: 9 [9472/9494]	Loss: 0.6613	LR: 0.100000
Training Epoch: 9 [9494/9494]	Loss: 0.6742	LR: 0.100000
Epoch 9 - Average Train Loss: 0.6751, Train Accuracy: 0.5803
Epoch 9 training time consumed: 138.51s
Evaluating Network.....
Test set: Epoch: 9, Average loss: 0.0030, Accuracy: 0.5632, Time consumed:8.19s
Training Epoch: 10 [256/9494]	Loss: 0.7315	LR: 0.020000
Training Epoch: 10 [512/9494]	Loss: 0.6783	LR: 0.020000
Training Epoch: 10 [768/9494]	Loss: 0.7337	LR: 0.020000
Training Epoch: 10 [1024/9494]	Loss: 0.7299	LR: 0.020000
Training Epoch: 10 [1280/9494]	Loss: 0.6555	LR: 0.020000
Training Epoch: 10 [1536/9494]	Loss: 0.6934	LR: 0.020000
Training Epoch: 10 [1792/9494]	Loss: 0.6748	LR: 0.020000
Training Epoch: 10 [2048/9494]	Loss: 0.6723	LR: 0.020000
Training Epoch: 10 [2304/9494]	Loss: 0.6715	LR: 0.020000
Training Epoch: 10 [2560/9494]	Loss: 0.6588	LR: 0.020000
Training Epoch: 10 [2816/9494]	Loss: 0.6554	LR: 0.020000
Training Epoch: 10 [3072/9494]	Loss: 0.6631	LR: 0.020000
Training Epoch: 10 [3328/9494]	Loss: 0.6453	LR: 0.020000
Training Epoch: 10 [3584/9494]	Loss: 0.6497	LR: 0.020000
Training Epoch: 10 [3840/9494]	Loss: 0.6664	LR: 0.020000
Training Epoch: 10 [4096/9494]	Loss: 0.6965	LR: 0.020000
Training Epoch: 10 [4352/9494]	Loss: 0.6641	LR: 0.020000
Training Epoch: 10 [4608/9494]	Loss: 0.6702	LR: 0.020000
Training Epoch: 10 [4864/9494]	Loss: 0.6709	LR: 0.020000
Training Epoch: 10 [5120/9494]	Loss: 0.6606	LR: 0.020000
Training Epoch: 10 [5376/9494]	Loss: 0.6641	LR: 0.020000
Training Epoch: 10 [5632/9494]	Loss: 0.6879	LR: 0.020000
Training Epoch: 10 [5888/9494]	Loss: 0.6758	LR: 0.020000
Training Epoch: 10 [6144/9494]	Loss: 0.6530	LR: 0.020000
Training Epoch: 10 [6400/9494]	Loss: 0.6736	LR: 0.020000
Training Epoch: 10 [6656/9494]	Loss: 0.6631	LR: 0.020000
Training Epoch: 10 [6912/9494]	Loss: 0.6604	LR: 0.020000
Training Epoch: 10 [7168/9494]	Loss: 0.6375	LR: 0.020000
Training Epoch: 10 [7424/9494]	Loss: 0.6529	LR: 0.020000
Training Epoch: 10 [7680/9494]	Loss: 0.6490	LR: 0.020000
Training Epoch: 10 [7936/9494]	Loss: 0.6522	LR: 0.020000
Training Epoch: 10 [8192/9494]	Loss: 0.6508	LR: 0.020000
Training Epoch: 10 [8448/9494]	Loss: 0.6890	LR: 0.020000
Training Epoch: 10 [8704/9494]	Loss: 0.6657	LR: 0.020000
Training Epoch: 10 [8960/9494]	Loss: 0.6596	LR: 0.020000
Training Epoch: 10 [9216/9494]	Loss: 0.6728	LR: 0.020000
Training Epoch: 10 [9472/9494]	Loss: 0.6760	LR: 0.020000
Training Epoch: 10 [9494/9494]	Loss: 0.8327	LR: 0.020000
Epoch 10 - Average Train Loss: 0.6713, Train Accuracy: 0.5922
Epoch 10 training time consumed: 137.95s
Evaluating Network.....
Test set: Epoch: 10, Average loss: 0.0029, Accuracy: 0.6266, Time consumed:8.11s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-10-best.pth
Training Epoch: 11 [256/9494]	Loss: 0.6699	LR: 0.020000
Training Epoch: 11 [512/9494]	Loss: 0.6634	LR: 0.020000
Training Epoch: 11 [768/9494]	Loss: 0.6719	LR: 0.020000
Training Epoch: 11 [1024/9494]	Loss: 0.6667	LR: 0.020000
Training Epoch: 11 [1280/9494]	Loss: 0.6682	LR: 0.020000
Training Epoch: 11 [1536/9494]	Loss: 0.6784	LR: 0.020000
Training Epoch: 11 [1792/9494]	Loss: 0.6534	LR: 0.020000
Training Epoch: 11 [2048/9494]	Loss: 0.6891	LR: 0.020000
Training Epoch: 11 [2304/9494]	Loss: 0.6565	LR: 0.020000
Training Epoch: 11 [2560/9494]	Loss: 0.6679	LR: 0.020000
Training Epoch: 11 [2816/9494]	Loss: 0.6615	LR: 0.020000
Training Epoch: 11 [3072/9494]	Loss: 0.6598	LR: 0.020000
Training Epoch: 11 [3328/9494]	Loss: 0.6698	LR: 0.020000
Training Epoch: 11 [3584/9494]	Loss: 0.6833	LR: 0.020000
Training Epoch: 11 [3840/9494]	Loss: 0.6782	LR: 0.020000
Training Epoch: 11 [4096/9494]	Loss: 0.6577	LR: 0.020000
Training Epoch: 11 [4352/9494]	Loss: 0.6613	LR: 0.020000
Training Epoch: 11 [4608/9494]	Loss: 0.6560	LR: 0.020000
Training Epoch: 11 [4864/9494]	Loss: 0.6660	LR: 0.020000
Training Epoch: 11 [5120/9494]	Loss: 0.6820	LR: 0.020000
Training Epoch: 11 [5376/9494]	Loss: 0.6446	LR: 0.020000
Training Epoch: 11 [5632/9494]	Loss: 0.6590	LR: 0.020000
Training Epoch: 11 [5888/9494]	Loss: 0.6586	LR: 0.020000
Training Epoch: 11 [6144/9494]	Loss: 0.6489	LR: 0.020000
Training Epoch: 11 [6400/9494]	Loss: 0.6495	LR: 0.020000
Training Epoch: 11 [6656/9494]	Loss: 0.6640	LR: 0.020000
Training Epoch: 11 [6912/9494]	Loss: 0.6796	LR: 0.020000
Training Epoch: 11 [7168/9494]	Loss: 0.6591	LR: 0.020000
Training Epoch: 11 [7424/9494]	Loss: 0.6581	LR: 0.020000
Training Epoch: 11 [7680/9494]	Loss: 0.6521	LR: 0.020000
Training Epoch: 11 [7936/9494]	Loss: 0.6620	LR: 0.020000
Training Epoch: 11 [8192/9494]	Loss: 0.6611	LR: 0.020000
Training Epoch: 11 [8448/9494]	Loss: 0.6648	LR: 0.020000
Training Epoch: 11 [8704/9494]	Loss: 0.6482	LR: 0.020000
Training Epoch: 11 [8960/9494]	Loss: 0.6457	LR: 0.020000
Training Epoch: 11 [9216/9494]	Loss: 0.6410	LR: 0.020000
Training Epoch: 11 [9472/9494]	Loss: 0.6693	LR: 0.020000
Training Epoch: 11 [9494/9494]	Loss: 0.6509	LR: 0.020000
Epoch 11 - Average Train Loss: 0.6629, Train Accuracy: 0.6134
Epoch 11 training time consumed: 138.03s
Evaluating Network.....
Test set: Epoch: 11, Average loss: 0.0029, Accuracy: 0.6189, Time consumed:8.01s
Training Epoch: 12 [256/9494]	Loss: 0.6579	LR: 0.020000
Training Epoch: 12 [512/9494]	Loss: 0.6210	LR: 0.020000
Training Epoch: 12 [768/9494]	Loss: 0.6591	LR: 0.020000
Training Epoch: 12 [1024/9494]	Loss: 0.6451	LR: 0.020000
Training Epoch: 12 [1280/9494]	Loss: 0.6592	LR: 0.020000
Training Epoch: 12 [1536/9494]	Loss: 0.6513	LR: 0.020000
Training Epoch: 12 [1792/9494]	Loss: 0.6639	LR: 0.020000
Training Epoch: 12 [2048/9494]	Loss: 0.6591	LR: 0.020000
Training Epoch: 12 [2304/9494]	Loss: 0.6636	LR: 0.020000
Training Epoch: 12 [2560/9494]	Loss: 0.6686	LR: 0.020000
Training Epoch: 12 [2816/9494]	Loss: 0.6611	LR: 0.020000
Training Epoch: 12 [3072/9494]	Loss: 0.6811	LR: 0.020000
Training Epoch: 12 [3328/9494]	Loss: 0.6666	LR: 0.020000
Training Epoch: 12 [3584/9494]	Loss: 0.6733	LR: 0.020000
Training Epoch: 12 [3840/9494]	Loss: 0.6319	LR: 0.020000
Training Epoch: 12 [4096/9494]	Loss: 0.6334	LR: 0.020000
Training Epoch: 12 [4352/9494]	Loss: 0.6947	LR: 0.020000
Training Epoch: 12 [4608/9494]	Loss: 0.6486	LR: 0.020000
Training Epoch: 12 [4864/9494]	Loss: 0.6428	LR: 0.020000
Training Epoch: 12 [5120/9494]	Loss: 0.6310	LR: 0.020000
Training Epoch: 12 [5376/9494]	Loss: 0.6486	LR: 0.020000
Training Epoch: 12 [5632/9494]	Loss: 0.6575	LR: 0.020000
Training Epoch: 12 [5888/9494]	Loss: 0.6742	LR: 0.020000
Training Epoch: 12 [6144/9494]	Loss: 0.6558	LR: 0.020000
Training Epoch: 12 [6400/9494]	Loss: 0.6560	LR: 0.020000
Training Epoch: 12 [6656/9494]	Loss: 0.6841	LR: 0.020000
Training Epoch: 12 [6912/9494]	Loss: 0.6577	LR: 0.020000
Training Epoch: 12 [7168/9494]	Loss: 0.6749	LR: 0.020000
Training Epoch: 12 [7424/9494]	Loss: 0.6428	LR: 0.020000
Training Epoch: 12 [7680/9494]	Loss: 0.6480	LR: 0.020000
Training Epoch: 12 [7936/9494]	Loss: 0.6635	LR: 0.020000
Training Epoch: 12 [8192/9494]	Loss: 0.6600	LR: 0.020000
Training Epoch: 12 [8448/9494]	Loss: 0.6553	LR: 0.020000
Training Epoch: 12 [8704/9494]	Loss: 0.6614	LR: 0.020000
Training Epoch: 12 [8960/9494]	Loss: 0.6360	LR: 0.020000
Training Epoch: 12 [9216/9494]	Loss: 0.6691	LR: 0.020000
Training Epoch: 12 [9472/9494]	Loss: 0.6377	LR: 0.020000
Training Epoch: 12 [9494/9494]	Loss: 0.6718	LR: 0.020000
Epoch 12 - Average Train Loss: 0.6567, Train Accuracy: 0.6146
Epoch 12 training time consumed: 137.67s
Evaluating Network.....
Test set: Epoch: 12, Average loss: 0.0029, Accuracy: 0.6189, Time consumed:8.15s
Training Epoch: 13 [256/9494]	Loss: 0.6609	LR: 0.020000
Training Epoch: 13 [512/9494]	Loss: 0.6490	LR: 0.020000
Training Epoch: 13 [768/9494]	Loss: 0.6676	LR: 0.020000
Training Epoch: 13 [1024/9494]	Loss: 0.6572	LR: 0.020000
Training Epoch: 13 [1280/9494]	Loss: 0.6473	LR: 0.020000
Training Epoch: 13 [1536/9494]	Loss: 0.6597	LR: 0.020000
Training Epoch: 13 [1792/9494]	Loss: 0.6631	LR: 0.020000
Training Epoch: 13 [2048/9494]	Loss: 0.6643	LR: 0.020000
Training Epoch: 13 [2304/9494]	Loss: 0.6485	LR: 0.020000
Training Epoch: 13 [2560/9494]	Loss: 0.6462	LR: 0.020000
Training Epoch: 13 [2816/9494]	Loss: 0.6528	LR: 0.020000
Training Epoch: 13 [3072/9494]	Loss: 0.6250	LR: 0.020000
Training Epoch: 13 [3328/9494]	Loss: 0.6712	LR: 0.020000
Training Epoch: 13 [3584/9494]	Loss: 0.6631	LR: 0.020000
Training Epoch: 13 [3840/9494]	Loss: 0.6658	LR: 0.020000
Training Epoch: 13 [4096/9494]	Loss: 0.6688	LR: 0.020000
Training Epoch: 13 [4352/9494]	Loss: 0.6317	LR: 0.020000
Training Epoch: 13 [4608/9494]	Loss: 0.6514	LR: 0.020000
Training Epoch: 13 [4864/9494]	Loss: 0.6472	LR: 0.020000
Training Epoch: 13 [5120/9494]	Loss: 0.6550	LR: 0.020000
Training Epoch: 13 [5376/9494]	Loss: 0.6551	LR: 0.020000
Training Epoch: 13 [5632/9494]	Loss: 0.6262	LR: 0.020000
Training Epoch: 13 [5888/9494]	Loss: 0.6621	LR: 0.020000
Training Epoch: 13 [6144/9494]	Loss: 0.6291	LR: 0.020000
Training Epoch: 13 [6400/9494]	Loss: 0.6514	LR: 0.020000
Training Epoch: 13 [6656/9494]	Loss: 0.6750	LR: 0.020000
Training Epoch: 13 [6912/9494]	Loss: 0.6489	LR: 0.020000
Training Epoch: 13 [7168/9494]	Loss: 0.6602	LR: 0.020000
Training Epoch: 13 [7424/9494]	Loss: 0.6451	LR: 0.020000
Training Epoch: 13 [7680/9494]	Loss: 0.6404	LR: 0.020000
Training Epoch: 13 [7936/9494]	Loss: 0.6640	LR: 0.020000
Training Epoch: 13 [8192/9494]	Loss: 0.6702	LR: 0.020000
Training Epoch: 13 [8448/9494]	Loss: 0.6639	LR: 0.020000
Training Epoch: 13 [8704/9494]	Loss: 0.6541	LR: 0.020000
Training Epoch: 13 [8960/9494]	Loss: 0.6519	LR: 0.020000
Training Epoch: 13 [9216/9494]	Loss: 0.6309	LR: 0.020000
Training Epoch: 13 [9472/9494]	Loss: 0.6723	LR: 0.020000
Training Epoch: 13 [9494/9494]	Loss: 0.6477	LR: 0.020000
Epoch 13 - Average Train Loss: 0.6539, Train Accuracy: 0.6166
Epoch 13 training time consumed: 137.42s
Evaluating Network.....
Test set: Epoch: 13, Average loss: 0.0029, Accuracy: 0.6126, Time consumed:8.20s
Training Epoch: 14 [256/9494]	Loss: 0.6450	LR: 0.020000
Training Epoch: 14 [512/9494]	Loss: 0.6407	LR: 0.020000
Training Epoch: 14 [768/9494]	Loss: 0.6697	LR: 0.020000
Training Epoch: 14 [1024/9494]	Loss: 0.6373	LR: 0.020000
Training Epoch: 14 [1280/9494]	Loss: 0.6534	LR: 0.020000
Training Epoch: 14 [1536/9494]	Loss: 0.6351	LR: 0.020000
Training Epoch: 14 [1792/9494]	Loss: 0.6753	LR: 0.020000
Training Epoch: 14 [2048/9494]	Loss: 0.6467	LR: 0.020000
Training Epoch: 14 [2304/9494]	Loss: 0.6404	LR: 0.020000
Training Epoch: 14 [2560/9494]	Loss: 0.6319	LR: 0.020000
Training Epoch: 14 [2816/9494]	Loss: 0.6397	LR: 0.020000
Training Epoch: 14 [3072/9494]	Loss: 0.6339	LR: 0.020000
Training Epoch: 14 [3328/9494]	Loss: 0.6518	LR: 0.020000
Training Epoch: 14 [3584/9494]	Loss: 0.6826	LR: 0.020000
Training Epoch: 14 [3840/9494]	Loss: 0.6739	LR: 0.020000
Training Epoch: 14 [4096/9494]	Loss: 0.6506	LR: 0.020000
Training Epoch: 14 [4352/9494]	Loss: 0.6432	LR: 0.020000
Training Epoch: 14 [4608/9494]	Loss: 0.6615	LR: 0.020000
Training Epoch: 14 [4864/9494]	Loss: 0.6869	LR: 0.020000
Training Epoch: 14 [5120/9494]	Loss: 0.6359	LR: 0.020000
Training Epoch: 14 [5376/9494]	Loss: 0.6805	LR: 0.020000
Training Epoch: 14 [5632/9494]	Loss: 0.6749	LR: 0.020000
Training Epoch: 14 [5888/9494]	Loss: 0.6605	LR: 0.020000
Training Epoch: 14 [6144/9494]	Loss: 0.6470	LR: 0.020000
Training Epoch: 14 [6400/9494]	Loss: 0.6562	LR: 0.020000
Training Epoch: 14 [6656/9494]	Loss: 0.6511	LR: 0.020000
Training Epoch: 14 [6912/9494]	Loss: 0.6590	LR: 0.020000
Training Epoch: 14 [7168/9494]	Loss: 0.6298	LR: 0.020000
Training Epoch: 14 [7424/9494]	Loss: 0.6549	LR: 0.020000
Training Epoch: 14 [7680/9494]	Loss: 0.6257	LR: 0.020000
Training Epoch: 14 [7936/9494]	Loss: 0.6394	LR: 0.020000
Training Epoch: 14 [8192/9494]	Loss: 0.6295	LR: 0.020000
Training Epoch: 14 [8448/9494]	Loss: 0.6610	LR: 0.020000
Training Epoch: 14 [8704/9494]	Loss: 0.6269	LR: 0.020000
Training Epoch: 14 [8960/9494]	Loss: 0.6489	LR: 0.020000
Training Epoch: 14 [9216/9494]	Loss: 0.6612	LR: 0.020000
Training Epoch: 14 [9472/9494]	Loss: 0.6676	LR: 0.020000
Training Epoch: 14 [9494/9494]	Loss: 0.6003	LR: 0.020000
Epoch 14 - Average Train Loss: 0.6515, Train Accuracy: 0.6269
Epoch 14 training time consumed: 137.69s
Evaluating Network.....
Test set: Epoch: 14, Average loss: 0.0028, Accuracy: 0.6596, Time consumed:8.12s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-14-best.pth
Training Epoch: 15 [256/9494]	Loss: 0.6436	LR: 0.020000
Training Epoch: 15 [512/9494]	Loss: 0.6049	LR: 0.020000
Training Epoch: 15 [768/9494]	Loss: 0.6361	LR: 0.020000
Training Epoch: 15 [1024/9494]	Loss: 0.6254	LR: 0.020000
Training Epoch: 15 [1280/9494]	Loss: 0.6931	LR: 0.020000
Training Epoch: 15 [1536/9494]	Loss: 0.6851	LR: 0.020000
Training Epoch: 15 [1792/9494]	Loss: 0.6486	LR: 0.020000
Training Epoch: 15 [2048/9494]	Loss: 0.6429	LR: 0.020000
Training Epoch: 15 [2304/9494]	Loss: 0.6411	LR: 0.020000
Training Epoch: 15 [2560/9494]	Loss: 0.6257	LR: 0.020000
Training Epoch: 15 [2816/9494]	Loss: 0.6328	LR: 0.020000
Training Epoch: 15 [3072/9494]	Loss: 0.6010	LR: 0.020000
Training Epoch: 15 [3328/9494]	Loss: 0.6353	LR: 0.020000
Training Epoch: 15 [3584/9494]	Loss: 0.6403	LR: 0.020000
Training Epoch: 15 [3840/9494]	Loss: 0.6371	LR: 0.020000
Training Epoch: 15 [4096/9494]	Loss: 0.6468	LR: 0.020000
Training Epoch: 15 [4352/9494]	Loss: 0.6344	LR: 0.020000
Training Epoch: 15 [4608/9494]	Loss: 0.6477	LR: 0.020000
Training Epoch: 15 [4864/9494]	Loss: 0.6314	LR: 0.020000
Training Epoch: 15 [5120/9494]	Loss: 0.5938	LR: 0.020000
Training Epoch: 15 [5376/9494]	Loss: 0.6345	LR: 0.020000
Training Epoch: 15 [5632/9494]	Loss: 0.6515	LR: 0.020000
Training Epoch: 15 [5888/9494]	Loss: 0.6177	LR: 0.020000
Training Epoch: 15 [6144/9494]	Loss: 0.6528	LR: 0.020000
Training Epoch: 15 [6400/9494]	Loss: 0.6307	LR: 0.020000
Training Epoch: 15 [6656/9494]	Loss: 0.6255	LR: 0.020000
Training Epoch: 15 [6912/9494]	Loss: 0.6077	LR: 0.020000
Training Epoch: 15 [7168/9494]	Loss: 0.6507	LR: 0.020000
Training Epoch: 15 [7424/9494]	Loss: 0.6411	LR: 0.020000
Training Epoch: 15 [7680/9494]	Loss: 0.6408	LR: 0.020000
Training Epoch: 15 [7936/9494]	Loss: 0.6295	LR: 0.020000
Training Epoch: 15 [8192/9494]	Loss: 0.6127	LR: 0.020000
Training Epoch: 15 [8448/9494]	Loss: 0.6200	LR: 0.020000
Training Epoch: 15 [8704/9494]	Loss: 0.6849	LR: 0.020000
Training Epoch: 15 [8960/9494]	Loss: 0.6630	LR: 0.020000
Training Epoch: 15 [9216/9494]	Loss: 0.6422	LR: 0.020000
Training Epoch: 15 [9472/9494]	Loss: 0.6178	LR: 0.020000
Training Epoch: 15 [9494/9494]	Loss: 0.5706	LR: 0.020000
Epoch 15 - Average Train Loss: 0.6369, Train Accuracy: 0.6478
Epoch 15 training time consumed: 137.50s
Evaluating Network.....
Test set: Epoch: 15, Average loss: 0.0028, Accuracy: 0.6349, Time consumed:8.05s
Training Epoch: 16 [256/9494]	Loss: 0.6745	LR: 0.020000
Training Epoch: 16 [512/9494]	Loss: 0.6497	LR: 0.020000
Training Epoch: 16 [768/9494]	Loss: 0.6534	LR: 0.020000
Training Epoch: 16 [1024/9494]	Loss: 0.6467	LR: 0.020000
Training Epoch: 16 [1280/9494]	Loss: 0.5831	LR: 0.020000
Training Epoch: 16 [1536/9494]	Loss: 0.6450	LR: 0.020000
Training Epoch: 16 [1792/9494]	Loss: 0.6509	LR: 0.020000
Training Epoch: 16 [2048/9494]	Loss: 0.6493	LR: 0.020000
Training Epoch: 16 [2304/9494]	Loss: 0.6458	LR: 0.020000
Training Epoch: 16 [2560/9494]	Loss: 0.6369	LR: 0.020000
Training Epoch: 16 [2816/9494]	Loss: 0.6460	LR: 0.020000
Training Epoch: 16 [3072/9494]	Loss: 0.6199	LR: 0.020000
Training Epoch: 16 [3328/9494]	Loss: 0.6217	LR: 0.020000
Training Epoch: 16 [3584/9494]	Loss: 0.6192	LR: 0.020000
Training Epoch: 16 [3840/9494]	Loss: 0.6092	LR: 0.020000
Training Epoch: 16 [4096/9494]	Loss: 0.5778	LR: 0.020000
Training Epoch: 16 [4352/9494]	Loss: 0.6352	LR: 0.020000
Training Epoch: 16 [4608/9494]	Loss: 0.6185	LR: 0.020000
Training Epoch: 16 [4864/9494]	Loss: 0.6253	LR: 0.020000
Training Epoch: 16 [5120/9494]	Loss: 0.6163	LR: 0.020000
Training Epoch: 16 [5376/9494]	Loss: 0.6284	LR: 0.020000
Training Epoch: 16 [5632/9494]	Loss: 0.6310	LR: 0.020000
Training Epoch: 16 [5888/9494]	Loss: 0.5906	LR: 0.020000
Training Epoch: 16 [6144/9494]	Loss: 0.6175	LR: 0.020000
Training Epoch: 16 [6400/9494]	Loss: 0.5932	LR: 0.020000
Training Epoch: 16 [6656/9494]	Loss: 0.6136	LR: 0.020000
Training Epoch: 16 [6912/9494]	Loss: 0.6103	LR: 0.020000
Training Epoch: 16 [7168/9494]	Loss: 0.5711	LR: 0.020000
Training Epoch: 16 [7424/9494]	Loss: 0.6187	LR: 0.020000
Training Epoch: 16 [7680/9494]	Loss: 0.6153	LR: 0.020000
Training Epoch: 16 [7936/9494]	Loss: 0.6452	LR: 0.020000
Training Epoch: 16 [8192/9494]	Loss: 0.6391	LR: 0.020000
Training Epoch: 16 [8448/9494]	Loss: 0.6643	LR: 0.020000
Training Epoch: 16 [8704/9494]	Loss: 0.6327	LR: 0.020000
Training Epoch: 16 [8960/9494]	Loss: 0.6245	LR: 0.020000
Training Epoch: 16 [9216/9494]	Loss: 0.6179	LR: 0.020000
Training Epoch: 16 [9472/9494]	Loss: 0.5990	LR: 0.020000
Training Epoch: 16 [9494/9494]	Loss: 0.5859	LR: 0.020000
Epoch 16 - Average Train Loss: 0.6252, Train Accuracy: 0.6554
Epoch 16 training time consumed: 137.58s
Evaluating Network.....
Test set: Epoch: 16, Average loss: 0.0028, Accuracy: 0.6620, Time consumed:8.00s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-16-best.pth
Training Epoch: 17 [256/9494]	Loss: 0.6368	LR: 0.020000
Training Epoch: 17 [512/9494]	Loss: 0.6800	LR: 0.020000
Training Epoch: 17 [768/9494]	Loss: 0.6044	LR: 0.020000
Training Epoch: 17 [1024/9494]	Loss: 0.6231	LR: 0.020000
Training Epoch: 17 [1280/9494]	Loss: 0.6169	LR: 0.020000
Training Epoch: 17 [1536/9494]	Loss: 0.6629	LR: 0.020000
Training Epoch: 17 [1792/9494]	Loss: 0.5881	LR: 0.020000
Training Epoch: 17 [2048/9494]	Loss: 0.6100	LR: 0.020000
Training Epoch: 17 [2304/9494]	Loss: 0.6620	LR: 0.020000
Training Epoch: 17 [2560/9494]	Loss: 0.5837	LR: 0.020000
Training Epoch: 17 [2816/9494]	Loss: 0.6567	LR: 0.020000
Training Epoch: 17 [3072/9494]	Loss: 0.6323	LR: 0.020000
Training Epoch: 17 [3328/9494]	Loss: 0.6312	LR: 0.020000
Training Epoch: 17 [3584/9494]	Loss: 0.6011	LR: 0.020000
Training Epoch: 17 [3840/9494]	Loss: 0.5993	LR: 0.020000
Training Epoch: 17 [4096/9494]	Loss: 0.6410	LR: 0.020000
Training Epoch: 17 [4352/9494]	Loss: 0.6208	LR: 0.020000
Training Epoch: 17 [4608/9494]	Loss: 0.5976	LR: 0.020000
Training Epoch: 17 [4864/9494]	Loss: 0.6168	LR: 0.020000
Training Epoch: 17 [5120/9494]	Loss: 0.5976	LR: 0.020000
Training Epoch: 17 [5376/9494]	Loss: 0.6101	LR: 0.020000
Training Epoch: 17 [5632/9494]	Loss: 0.6240	LR: 0.020000
Training Epoch: 17 [5888/9494]	Loss: 0.6316	LR: 0.020000
Training Epoch: 17 [6144/9494]	Loss: 0.6009	LR: 0.020000
Training Epoch: 17 [6400/9494]	Loss: 0.6135	LR: 0.020000
Training Epoch: 17 [6656/9494]	Loss: 0.6182	LR: 0.020000
Training Epoch: 17 [6912/9494]	Loss: 0.5913	LR: 0.020000
Training Epoch: 17 [7168/9494]	Loss: 0.5619	LR: 0.020000
Training Epoch: 17 [7424/9494]	Loss: 0.6327	LR: 0.020000
Training Epoch: 17 [7680/9494]	Loss: 0.5888	LR: 0.020000
Training Epoch: 17 [7936/9494]	Loss: 0.6128	LR: 0.020000
Training Epoch: 17 [8192/9494]	Loss: 0.5653	LR: 0.020000
Training Epoch: 17 [8448/9494]	Loss: 0.6196	LR: 0.020000
Training Epoch: 17 [8704/9494]	Loss: 0.6022	LR: 0.020000
Training Epoch: 17 [8960/9494]	Loss: 0.6134	LR: 0.020000
Training Epoch: 17 [9216/9494]	Loss: 0.5887	LR: 0.020000
Training Epoch: 17 [9472/9494]	Loss: 0.5676	LR: 0.020000
Training Epoch: 17 [9494/9494]	Loss: 0.7803	LR: 0.020000
Epoch 17 - Average Train Loss: 0.6140, Train Accuracy: 0.6642
Epoch 17 training time consumed: 137.55s
Evaluating Network.....
Test set: Epoch: 17, Average loss: 0.0033, Accuracy: 0.5729, Time consumed:8.32s
Training Epoch: 18 [256/9494]	Loss: 0.6742	LR: 0.020000
Training Epoch: 18 [512/9494]	Loss: 0.6324	LR: 0.020000
Training Epoch: 18 [768/9494]	Loss: 0.6232	LR: 0.020000
Training Epoch: 18 [1024/9494]	Loss: 0.6164	LR: 0.020000
Training Epoch: 18 [1280/9494]	Loss: 0.6329	LR: 0.020000
Training Epoch: 18 [1536/9494]	Loss: 0.6132	LR: 0.020000
Training Epoch: 18 [1792/9494]	Loss: 0.6266	LR: 0.020000
Training Epoch: 18 [2048/9494]	Loss: 0.5748	LR: 0.020000
Training Epoch: 18 [2304/9494]	Loss: 0.6310	LR: 0.020000
Training Epoch: 18 [2560/9494]	Loss: 0.6261	LR: 0.020000
Training Epoch: 18 [2816/9494]	Loss: 0.6011	LR: 0.020000
Training Epoch: 18 [3072/9494]	Loss: 0.6409	LR: 0.020000
Training Epoch: 18 [3328/9494]	Loss: 0.6160	LR: 0.020000
Training Epoch: 18 [3584/9494]	Loss: 0.5892	LR: 0.020000
Training Epoch: 18 [3840/9494]	Loss: 0.6406	LR: 0.020000
Training Epoch: 18 [4096/9494]	Loss: 0.6252	LR: 0.020000
Training Epoch: 18 [4352/9494]	Loss: 0.6256	LR: 0.020000
Training Epoch: 18 [4608/9494]	Loss: 0.6005	LR: 0.020000
Training Epoch: 18 [4864/9494]	Loss: 0.5755	LR: 0.020000
Training Epoch: 18 [5120/9494]	Loss: 0.5789	LR: 0.020000
Training Epoch: 18 [5376/9494]	Loss: 0.5981	LR: 0.020000
Training Epoch: 18 [5632/9494]	Loss: 0.5633	LR: 0.020000
Training Epoch: 18 [5888/9494]	Loss: 0.5863	LR: 0.020000
Training Epoch: 18 [6144/9494]	Loss: 0.5813	LR: 0.020000
Training Epoch: 18 [6400/9494]	Loss: 0.5860	LR: 0.020000
Training Epoch: 18 [6656/9494]	Loss: 0.5975	LR: 0.020000
Training Epoch: 18 [6912/9494]	Loss: 0.5636	LR: 0.020000
Training Epoch: 18 [7168/9494]	Loss: 0.5741	LR: 0.020000
Training Epoch: 18 [7424/9494]	Loss: 0.6080	LR: 0.020000
Training Epoch: 18 [7680/9494]	Loss: 0.5798	LR: 0.020000
Training Epoch: 18 [7936/9494]	Loss: 0.6206	LR: 0.020000
Training Epoch: 18 [8192/9494]	Loss: 0.5954	LR: 0.020000
Training Epoch: 18 [8448/9494]	Loss: 0.5794	LR: 0.020000
Training Epoch: 18 [8704/9494]	Loss: 0.6055	LR: 0.020000
Training Epoch: 18 [8960/9494]	Loss: 0.5598	LR: 0.020000
Training Epoch: 18 [9216/9494]	Loss: 0.5928	LR: 0.020000
Training Epoch: 18 [9472/9494]	Loss: 0.6019	LR: 0.020000
Training Epoch: 18 [9494/9494]	Loss: 0.5227	LR: 0.020000
Epoch 18 - Average Train Loss: 0.6035, Train Accuracy: 0.6763
Epoch 18 training time consumed: 137.36s
Evaluating Network.....
Test set: Epoch: 18, Average loss: 0.0025, Accuracy: 0.7201, Time consumed:8.13s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-18-best.pth
Training Epoch: 19 [256/9494]	Loss: 0.5930	LR: 0.020000
Training Epoch: 19 [512/9494]	Loss: 0.5826	LR: 0.020000
Training Epoch: 19 [768/9494]	Loss: 0.5969	LR: 0.020000
Training Epoch: 19 [1024/9494]	Loss: 0.5828	LR: 0.020000
Training Epoch: 19 [1280/9494]	Loss: 0.6079	LR: 0.020000
Training Epoch: 19 [1536/9494]	Loss: 0.5478	LR: 0.020000
Training Epoch: 19 [1792/9494]	Loss: 0.5736	LR: 0.020000
Training Epoch: 19 [2048/9494]	Loss: 0.5627	LR: 0.020000
Training Epoch: 19 [2304/9494]	Loss: 0.5899	LR: 0.020000
Training Epoch: 19 [2560/9494]	Loss: 0.5910	LR: 0.020000
Training Epoch: 19 [2816/9494]	Loss: 0.6128	LR: 0.020000
Training Epoch: 19 [3072/9494]	Loss: 0.5846	LR: 0.020000
Training Epoch: 19 [3328/9494]	Loss: 0.5430	LR: 0.020000
Training Epoch: 19 [3584/9494]	Loss: 0.5816	LR: 0.020000
Training Epoch: 19 [3840/9494]	Loss: 0.5706	LR: 0.020000
Training Epoch: 19 [4096/9494]	Loss: 0.6255	LR: 0.020000
Training Epoch: 19 [4352/9494]	Loss: 0.5951	LR: 0.020000
Training Epoch: 19 [4608/9494]	Loss: 0.5859	LR: 0.020000
Training Epoch: 19 [4864/9494]	Loss: 0.5654	LR: 0.020000
Training Epoch: 19 [5120/9494]	Loss: 0.5570	LR: 0.020000
Training Epoch: 19 [5376/9494]	Loss: 0.5280	LR: 0.020000
Training Epoch: 19 [5632/9494]	Loss: 0.5603	LR: 0.020000
Training Epoch: 19 [5888/9494]	Loss: 0.5483	LR: 0.020000
Training Epoch: 19 [6144/9494]	Loss: 0.5990	LR: 0.020000
Training Epoch: 19 [6400/9494]	Loss: 0.5232	LR: 0.020000
Training Epoch: 19 [6656/9494]	Loss: 0.5437	LR: 0.020000
Training Epoch: 19 [6912/9494]	Loss: 0.5979	LR: 0.020000
Training Epoch: 19 [7168/9494]	Loss: 0.5681	LR: 0.020000
Training Epoch: 19 [7424/9494]	Loss: 0.5506	LR: 0.020000
Training Epoch: 19 [7680/9494]	Loss: 0.5238	LR: 0.020000
Training Epoch: 19 [7936/9494]	Loss: 0.5605	LR: 0.020000
Training Epoch: 19 [8192/9494]	Loss: 0.5211	LR: 0.020000
Training Epoch: 19 [8448/9494]	Loss: 0.5571	LR: 0.020000
Training Epoch: 19 [8704/9494]	Loss: 0.5377	LR: 0.020000
Training Epoch: 19 [8960/9494]	Loss: 0.5259	LR: 0.020000
Training Epoch: 19 [9216/9494]	Loss: 0.5954	LR: 0.020000
Training Epoch: 19 [9472/9494]	Loss: 0.5278	LR: 0.020000
Training Epoch: 19 [9494/9494]	Loss: 0.5722	LR: 0.020000
Epoch 19 - Average Train Loss: 0.5681, Train Accuracy: 0.7081
Epoch 19 training time consumed: 137.93s
Evaluating Network.....
Test set: Epoch: 19, Average loss: 0.0035, Accuracy: 0.5884, Time consumed:8.09s
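At this point the learning rate drops from 0.020000 to 0.004000 (a factor of 0.2) and then stays flat, which is consistent with a MultiStepLR-style piecewise-constant schedule. A minimal sketch of such a schedule; the base LR of 0.1 and the exact milestone epochs are assumptions inferred from the two plateaus visible in the log, not confirmed settings of the training script:

```python
# Hedged sketch: a step-decay schedule consistent with the LR values in this
# log (0.02 through epoch 19, 0.004 from epoch 20 onward -- a factor of 0.2).
# base_lr and milestones are assumptions, not the script's actual config.

def lr_at_epoch(epoch: int, base_lr: float = 0.1,
                milestones: tuple = (10, 20), gamma: float = 0.2) -> float:
    """Piecewise-constant LR: multiply by gamma at each milestone epoch."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Reproduces the two plateaus visible in this excerpt:
print(f"{lr_at_epoch(13):.6f}")  # 0.020000
print(f"{lr_at_epoch(20):.6f}")  # 0.004000
```

PyTorch's built-in `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10, 20], gamma=0.2)` implements the same behavior on a real optimizer.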
Training Epoch: 20 [256/9494]	Loss: 0.5871	LR: 0.004000
Training Epoch: 20 [512/9494]	Loss: 0.6702	LR: 0.004000
Training Epoch: 20 [768/9494]	Loss: 0.6478	LR: 0.004000
Training Epoch: 20 [1024/9494]	Loss: 0.5864	LR: 0.004000
Training Epoch: 20 [1280/9494]	Loss: 0.5087	LR: 0.004000
Training Epoch: 20 [1536/9494]	Loss: 0.6214	LR: 0.004000
Training Epoch: 20 [1792/9494]	Loss: 0.5550	LR: 0.004000
Training Epoch: 20 [2048/9494]	Loss: 0.5050	LR: 0.004000
Training Epoch: 20 [2304/9494]	Loss: 0.5388	LR: 0.004000
Training Epoch: 20 [2560/9494]	Loss: 0.5421	LR: 0.004000
Training Epoch: 20 [2816/9494]	Loss: 0.5546	LR: 0.004000
Training Epoch: 20 [3072/9494]	Loss: 0.5712	LR: 0.004000
Training Epoch: 20 [3328/9494]	Loss: 0.5013	LR: 0.004000
Training Epoch: 20 [3584/9494]	Loss: 0.5481	LR: 0.004000
Training Epoch: 20 [3840/9494]	Loss: 0.5546	LR: 0.004000
Training Epoch: 20 [4096/9494]	Loss: 0.5892	LR: 0.004000
Training Epoch: 20 [4352/9494]	Loss: 0.6025	LR: 0.004000
Training Epoch: 20 [4608/9494]	Loss: 0.5619	LR: 0.004000
Training Epoch: 20 [4864/9494]	Loss: 0.5642	LR: 0.004000
Training Epoch: 20 [5120/9494]	Loss: 0.5238	LR: 0.004000
Training Epoch: 20 [5376/9494]	Loss: 0.5249	LR: 0.004000
Training Epoch: 20 [5632/9494]	Loss: 0.5277	LR: 0.004000
Training Epoch: 20 [5888/9494]	Loss: 0.5234	LR: 0.004000
Training Epoch: 20 [6144/9494]	Loss: 0.5801	LR: 0.004000
Training Epoch: 20 [6400/9494]	Loss: 0.5134	LR: 0.004000
Training Epoch: 20 [6656/9494]	Loss: 0.5463	LR: 0.004000
Training Epoch: 20 [6912/9494]	Loss: 0.5166	LR: 0.004000
Training Epoch: 20 [7168/9494]	Loss: 0.5016	LR: 0.004000
Training Epoch: 20 [7424/9494]	Loss: 0.5491	LR: 0.004000
Training Epoch: 20 [7680/9494]	Loss: 0.5592	LR: 0.004000
Training Epoch: 20 [7936/9494]	Loss: 0.5249	LR: 0.004000
Training Epoch: 20 [8192/9494]	Loss: 0.5387	LR: 0.004000
Training Epoch: 20 [8448/9494]	Loss: 0.5125	LR: 0.004000
Training Epoch: 20 [8704/9494]	Loss: 0.5293	LR: 0.004000
Training Epoch: 20 [8960/9494]	Loss: 0.5395	LR: 0.004000
Training Epoch: 20 [9216/9494]	Loss: 0.5292	LR: 0.004000
Training Epoch: 20 [9472/9494]	Loss: 0.5141	LR: 0.004000
Training Epoch: 20 [9494/9494]	Loss: 0.5987	LR: 0.004000
Epoch 20 - Average Train Loss: 0.5505, Train Accuracy: 0.7239
Epoch 20 training time consumed: 137.70s
Evaluating Network.....
Test set: Epoch: 20, Average loss: 0.0025, Accuracy: 0.7259, Time consumed:8.20s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-20-best.pth
Training Epoch: 21 [256/9494]	Loss: 0.5787	LR: 0.004000
Training Epoch: 21 [512/9494]	Loss: 0.5431	LR: 0.004000
Training Epoch: 21 [768/9494]	Loss: 0.5286	LR: 0.004000
Training Epoch: 21 [1024/9494]	Loss: 0.5323	LR: 0.004000
Training Epoch: 21 [1280/9494]	Loss: 0.5731	LR: 0.004000
Training Epoch: 21 [1536/9494]	Loss: 0.5221	LR: 0.004000
Training Epoch: 21 [1792/9494]	Loss: 0.5119	LR: 0.004000
Training Epoch: 21 [2048/9494]	Loss: 0.5089	LR: 0.004000
Training Epoch: 21 [2304/9494]	Loss: 0.5358	LR: 0.004000
Training Epoch: 21 [2560/9494]	Loss: 0.5457	LR: 0.004000
Training Epoch: 21 [2816/9494]	Loss: 0.5856	LR: 0.004000
Training Epoch: 21 [3072/9494]	Loss: 0.5144	LR: 0.004000
Training Epoch: 21 [3328/9494]	Loss: 0.5351	LR: 0.004000
Training Epoch: 21 [3584/9494]	Loss: 0.5081	LR: 0.004000
Training Epoch: 21 [3840/9494]	Loss: 0.5273	LR: 0.004000
Training Epoch: 21 [4096/9494]	Loss: 0.5601	LR: 0.004000
Training Epoch: 21 [4352/9494]	Loss: 0.5178	LR: 0.004000
Training Epoch: 21 [4608/9494]	Loss: 0.5409	LR: 0.004000
Training Epoch: 21 [4864/9494]	Loss: 0.5054	LR: 0.004000
Training Epoch: 21 [5120/9494]	Loss: 0.5871	LR: 0.004000
Training Epoch: 21 [5376/9494]	Loss: 0.5473	LR: 0.004000
Training Epoch: 21 [5632/9494]	Loss: 0.5579	LR: 0.004000
Training Epoch: 21 [5888/9494]	Loss: 0.5095	LR: 0.004000
Training Epoch: 21 [6144/9494]	Loss: 0.5226	LR: 0.004000
Training Epoch: 21 [6400/9494]	Loss: 0.5210	LR: 0.004000
Training Epoch: 21 [6656/9494]	Loss: 0.4625	LR: 0.004000
Training Epoch: 21 [6912/9494]	Loss: 0.5223	LR: 0.004000
Training Epoch: 21 [7168/9494]	Loss: 0.5349	LR: 0.004000
Training Epoch: 21 [7424/9494]	Loss: 0.4919	LR: 0.004000
Training Epoch: 21 [7680/9494]	Loss: 0.4750	LR: 0.004000
Training Epoch: 21 [7936/9494]	Loss: 0.4947	LR: 0.004000
Training Epoch: 21 [8192/9494]	Loss: 0.5318	LR: 0.004000
Training Epoch: 21 [8448/9494]	Loss: 0.5189	LR: 0.004000
Training Epoch: 21 [8704/9494]	Loss: 0.5042	LR: 0.004000
Training Epoch: 21 [8960/9494]	Loss: 0.4628	LR: 0.004000
Training Epoch: 21 [9216/9494]	Loss: 0.4867	LR: 0.004000
Training Epoch: 21 [9472/9494]	Loss: 0.4769	LR: 0.004000
Training Epoch: 21 [9494/9494]	Loss: 0.5568	LR: 0.004000
Epoch 21 - Average Train Loss: 0.5239, Train Accuracy: 0.7367
Epoch 21 training time consumed: 138.09s
Evaluating Network.....
Test set: Epoch: 21, Average loss: 0.0023, Accuracy: 0.7414, Time consumed:8.08s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-21-best.pth
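The `-best.pth` saves in this log fire only when test accuracy exceeds the best seen so far: epochs 14, 16, 18, 20, 21, 23, and 25 save; epochs with lower accuracy do not. A minimal sketch of that best-checkpoint tracking, applied to the accuracies in this excerpt; the prior best of 0.62 (carried over from epochs before this excerpt) is a hypothetical value chosen so epoch 13 does not trigger a save, matching the log:

```python
# Hedged sketch of the best-checkpoint pattern implied by this log: a weights
# file is written only when test accuracy improves on the best so far.
# prior_best=0.62 is an assumed carry-over from earlier epochs not shown here.

def best_epochs(accuracies, start_epoch: int = 13, prior_best: float = 0.62):
    """Return the epochs at which a new best test accuracy is reached."""
    best, saved = prior_best, []
    for epoch, acc in enumerate(accuracies, start=start_epoch):
        if acc > best:
            best = acc
            saved.append(epoch)  # in the real script: torch.save(...-best.pth)
    return saved

# Test accuracies from epochs 13-27 of this log:
accs = [0.6126, 0.6596, 0.6349, 0.6620, 0.5729, 0.7201, 0.5884,
        0.7259, 0.7414, 0.7308, 0.7772, 0.7579, 0.8291, 0.8039, 0.8068]
print(best_epochs(accs))  # [14, 16, 18, 20, 21, 23, 25]
```

These are exactly the epochs whose `Saving weights file to checkpoint/...` lines appear in the log.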
Training Epoch: 22 [256/9494]	Loss: 0.5031	LR: 0.004000
Training Epoch: 22 [512/9494]	Loss: 0.5419	LR: 0.004000
Training Epoch: 22 [768/9494]	Loss: 0.5406	LR: 0.004000
Training Epoch: 22 [1024/9494]	Loss: 0.5940	LR: 0.004000
Training Epoch: 22 [1280/9494]	Loss: 0.5103	LR: 0.004000
Training Epoch: 22 [1536/9494]	Loss: 0.5262	LR: 0.004000
Training Epoch: 22 [1792/9494]	Loss: 0.5210	LR: 0.004000
Training Epoch: 22 [2048/9494]	Loss: 0.4757	LR: 0.004000
Training Epoch: 22 [2304/9494]	Loss: 0.4833	LR: 0.004000
Training Epoch: 22 [2560/9494]	Loss: 0.5584	LR: 0.004000
Training Epoch: 22 [2816/9494]	Loss: 0.5271	LR: 0.004000
Training Epoch: 22 [3072/9494]	Loss: 0.5567	LR: 0.004000
Training Epoch: 22 [3328/9494]	Loss: 0.5733	LR: 0.004000
Training Epoch: 22 [3584/9494]	Loss: 0.5234	LR: 0.004000
Training Epoch: 22 [3840/9494]	Loss: 0.5058	LR: 0.004000
Training Epoch: 22 [4096/9494]	Loss: 0.5294	LR: 0.004000
Training Epoch: 22 [4352/9494]	Loss: 0.5480	LR: 0.004000
Training Epoch: 22 [4608/9494]	Loss: 0.5678	LR: 0.004000
Training Epoch: 22 [4864/9494]	Loss: 0.5171	LR: 0.004000
Training Epoch: 22 [5120/9494]	Loss: 0.5230	LR: 0.004000
Training Epoch: 22 [5376/9494]	Loss: 0.4682	LR: 0.004000
Training Epoch: 22 [5632/9494]	Loss: 0.5298	LR: 0.004000
Training Epoch: 22 [5888/9494]	Loss: 0.4902	LR: 0.004000
Training Epoch: 22 [6144/9494]	Loss: 0.5109	LR: 0.004000
Training Epoch: 22 [6400/9494]	Loss: 0.4972	LR: 0.004000
Training Epoch: 22 [6656/9494]	Loss: 0.5049	LR: 0.004000
Training Epoch: 22 [6912/9494]	Loss: 0.4993	LR: 0.004000
Training Epoch: 22 [7168/9494]	Loss: 0.4997	LR: 0.004000
Training Epoch: 22 [7424/9494]	Loss: 0.4901	LR: 0.004000
Training Epoch: 22 [7680/9494]	Loss: 0.4924	LR: 0.004000
Training Epoch: 22 [7936/9494]	Loss: 0.4962	LR: 0.004000
Training Epoch: 22 [8192/9494]	Loss: 0.4875	LR: 0.004000
Training Epoch: 22 [8448/9494]	Loss: 0.4578	LR: 0.004000
Training Epoch: 22 [8704/9494]	Loss: 0.5139	LR: 0.004000
Training Epoch: 22 [8960/9494]	Loss: 0.4767	LR: 0.004000
Training Epoch: 22 [9216/9494]	Loss: 0.4619	LR: 0.004000
Training Epoch: 22 [9472/9494]	Loss: 0.5313	LR: 0.004000
Training Epoch: 22 [9494/9494]	Loss: 0.3794	LR: 0.004000
Epoch 22 - Average Train Loss: 0.5141, Train Accuracy: 0.7477
Epoch 22 training time consumed: 137.40s
Evaluating Network.....
Test set: Epoch: 22, Average loss: 0.0024, Accuracy: 0.7308, Time consumed:8.05s
Training Epoch: 23 [256/9494]	Loss: 0.4887	LR: 0.004000
Training Epoch: 23 [512/9494]	Loss: 0.5183	LR: 0.004000
Training Epoch: 23 [768/9494]	Loss: 0.4877	LR: 0.004000
Training Epoch: 23 [1024/9494]	Loss: 0.5139	LR: 0.004000
Training Epoch: 23 [1280/9494]	Loss: 0.4601	LR: 0.004000
Training Epoch: 23 [1536/9494]	Loss: 0.5304	LR: 0.004000
Training Epoch: 23 [1792/9494]	Loss: 0.4979	LR: 0.004000
Training Epoch: 23 [2048/9494]	Loss: 0.4770	LR: 0.004000
Training Epoch: 23 [2304/9494]	Loss: 0.4960	LR: 0.004000
Training Epoch: 23 [2560/9494]	Loss: 0.5432	LR: 0.004000
Training Epoch: 23 [2816/9494]	Loss: 0.4384	LR: 0.004000
Training Epoch: 23 [3072/9494]	Loss: 0.4980	LR: 0.004000
Training Epoch: 23 [3328/9494]	Loss: 0.5012	LR: 0.004000
Training Epoch: 23 [3584/9494]	Loss: 0.4721	LR: 0.004000
Training Epoch: 23 [3840/9494]	Loss: 0.5030	LR: 0.004000
Training Epoch: 23 [4096/9494]	Loss: 0.4750	LR: 0.004000
Training Epoch: 23 [4352/9494]	Loss: 0.4851	LR: 0.004000
Training Epoch: 23 [4608/9494]	Loss: 0.4457	LR: 0.004000
Training Epoch: 23 [4864/9494]	Loss: 0.5033	LR: 0.004000
Training Epoch: 23 [5120/9494]	Loss: 0.4931	LR: 0.004000
Training Epoch: 23 [5376/9494]	Loss: 0.4187	LR: 0.004000
Training Epoch: 23 [5632/9494]	Loss: 0.4444	LR: 0.004000
Training Epoch: 23 [5888/9494]	Loss: 0.5012	LR: 0.004000
Training Epoch: 23 [6144/9494]	Loss: 0.4770	LR: 0.004000
Training Epoch: 23 [6400/9494]	Loss: 0.4196	LR: 0.004000
Training Epoch: 23 [6656/9494]	Loss: 0.4567	LR: 0.004000
Training Epoch: 23 [6912/9494]	Loss: 0.4711	LR: 0.004000
Training Epoch: 23 [7168/9494]	Loss: 0.4525	LR: 0.004000
Training Epoch: 23 [7424/9494]	Loss: 0.4611	LR: 0.004000
Training Epoch: 23 [7680/9494]	Loss: 0.4520	LR: 0.004000
Training Epoch: 23 [7936/9494]	Loss: 0.4282	LR: 0.004000
Training Epoch: 23 [8192/9494]	Loss: 0.4281	LR: 0.004000
Training Epoch: 23 [8448/9494]	Loss: 0.4840	LR: 0.004000
Training Epoch: 23 [8704/9494]	Loss: 0.4064	LR: 0.004000
Training Epoch: 23 [8960/9494]	Loss: 0.4572	LR: 0.004000
Training Epoch: 23 [9216/9494]	Loss: 0.4609	LR: 0.004000
Training Epoch: 23 [9472/9494]	Loss: 0.4475	LR: 0.004000
Training Epoch: 23 [9494/9494]	Loss: 0.6711	LR: 0.004000
Epoch 23 - Average Train Loss: 0.4733, Train Accuracy: 0.7769
Epoch 23 training time consumed: 137.66s
Evaluating Network.....
Test set: Epoch: 23, Average loss: 0.0023, Accuracy: 0.7772, Time consumed:8.22s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-23-best.pth
Training Epoch: 24 [256/9494]	Loss: 0.4934	LR: 0.004000
Training Epoch: 24 [512/9494]	Loss: 0.5102	LR: 0.004000
Training Epoch: 24 [768/9494]	Loss: 0.4567	LR: 0.004000
Training Epoch: 24 [1024/9494]	Loss: 0.4280	LR: 0.004000
Training Epoch: 24 [1280/9494]	Loss: 0.4309	LR: 0.004000
Training Epoch: 24 [1536/9494]	Loss: 0.4405	LR: 0.004000
Training Epoch: 24 [1792/9494]	Loss: 0.4336	LR: 0.004000
Training Epoch: 24 [2048/9494]	Loss: 0.4603	LR: 0.004000
Training Epoch: 24 [2304/9494]	Loss: 0.4928	LR: 0.004000
Training Epoch: 24 [2560/9494]	Loss: 0.5103	LR: 0.004000
Training Epoch: 24 [2816/9494]	Loss: 0.4265	LR: 0.004000
Training Epoch: 24 [3072/9494]	Loss: 0.4562	LR: 0.004000
Training Epoch: 24 [3328/9494]	Loss: 0.4786	LR: 0.004000
Training Epoch: 24 [3584/9494]	Loss: 0.3871	LR: 0.004000
Training Epoch: 24 [3840/9494]	Loss: 0.4662	LR: 0.004000
Training Epoch: 24 [4096/9494]	Loss: 0.4452	LR: 0.004000
Training Epoch: 24 [4352/9494]	Loss: 0.4301	LR: 0.004000
Training Epoch: 24 [4608/9494]	Loss: 0.3818	LR: 0.004000
Training Epoch: 24 [4864/9494]	Loss: 0.4519	LR: 0.004000
Training Epoch: 24 [5120/9494]	Loss: 0.4490	LR: 0.004000
Training Epoch: 24 [5376/9494]	Loss: 0.4308	LR: 0.004000
Training Epoch: 24 [5632/9494]	Loss: 0.4137	LR: 0.004000
Training Epoch: 24 [5888/9494]	Loss: 0.4463	LR: 0.004000
Training Epoch: 24 [6144/9494]	Loss: 0.5351	LR: 0.004000
Training Epoch: 24 [6400/9494]	Loss: 0.4819	LR: 0.004000
Training Epoch: 24 [6656/9494]	Loss: 0.4216	LR: 0.004000
Training Epoch: 24 [6912/9494]	Loss: 0.4622	LR: 0.004000
Training Epoch: 24 [7168/9494]	Loss: 0.4951	LR: 0.004000
Training Epoch: 24 [7424/9494]	Loss: 0.4752	LR: 0.004000
Training Epoch: 24 [7680/9494]	Loss: 0.5275	LR: 0.004000
Training Epoch: 24 [7936/9494]	Loss: 0.4704	LR: 0.004000
Training Epoch: 24 [8192/9494]	Loss: 0.4269	LR: 0.004000
Training Epoch: 24 [8448/9494]	Loss: 0.4640	LR: 0.004000
Training Epoch: 24 [8704/9494]	Loss: 0.4412	LR: 0.004000
Training Epoch: 24 [8960/9494]	Loss: 0.4004	LR: 0.004000
Training Epoch: 24 [9216/9494]	Loss: 0.4662	LR: 0.004000
Training Epoch: 24 [9472/9494]	Loss: 0.4697	LR: 0.004000
Training Epoch: 24 [9494/9494]	Loss: 0.5836	LR: 0.004000
Epoch 24 - Average Train Loss: 0.4559, Train Accuracy: 0.7885
Epoch 24 training time consumed: 138.08s
Evaluating Network.....
Test set: Epoch: 24, Average loss: 0.0022, Accuracy: 0.7579, Time consumed:8.16s
Training Epoch: 25 [256/9494]	Loss: 0.5058	LR: 0.004000
Training Epoch: 25 [512/9494]	Loss: 0.4558	LR: 0.004000
Training Epoch: 25 [768/9494]	Loss: 0.4737	LR: 0.004000
Training Epoch: 25 [1024/9494]	Loss: 0.4025	LR: 0.004000
Training Epoch: 25 [1280/9494]	Loss: 0.4296	LR: 0.004000
Training Epoch: 25 [1536/9494]	Loss: 0.4570	LR: 0.004000
Training Epoch: 25 [1792/9494]	Loss: 0.3940	LR: 0.004000
Training Epoch: 25 [2048/9494]	Loss: 0.4252	LR: 0.004000
Training Epoch: 25 [2304/9494]	Loss: 0.4493	LR: 0.004000
Training Epoch: 25 [2560/9494]	Loss: 0.4169	LR: 0.004000
Training Epoch: 25 [2816/9494]	Loss: 0.4105	LR: 0.004000
Training Epoch: 25 [3072/9494]	Loss: 0.4689	LR: 0.004000
Training Epoch: 25 [3328/9494]	Loss: 0.4615	LR: 0.004000
Training Epoch: 25 [3584/9494]	Loss: 0.4410	LR: 0.004000
Training Epoch: 25 [3840/9494]	Loss: 0.4388	LR: 0.004000
Training Epoch: 25 [4096/9494]	Loss: 0.3981	LR: 0.004000
Training Epoch: 25 [4352/9494]	Loss: 0.4856	LR: 0.004000
Training Epoch: 25 [4608/9494]	Loss: 0.4212	LR: 0.004000
Training Epoch: 25 [4864/9494]	Loss: 0.4389	LR: 0.004000
Training Epoch: 25 [5120/9494]	Loss: 0.4078	LR: 0.004000
Training Epoch: 25 [5376/9494]	Loss: 0.4139	LR: 0.004000
Training Epoch: 25 [5632/9494]	Loss: 0.3832	LR: 0.004000
Training Epoch: 25 [5888/9494]	Loss: 0.4165	LR: 0.004000
Training Epoch: 25 [6144/9494]	Loss: 0.4690	LR: 0.004000
Training Epoch: 25 [6400/9494]	Loss: 0.3908	LR: 0.004000
Training Epoch: 25 [6656/9494]	Loss: 0.4340	LR: 0.004000
Training Epoch: 25 [6912/9494]	Loss: 0.3951	LR: 0.004000
Training Epoch: 25 [7168/9494]	Loss: 0.4088	LR: 0.004000
Training Epoch: 25 [7424/9494]	Loss: 0.3273	LR: 0.004000
Training Epoch: 25 [7680/9494]	Loss: 0.4327	LR: 0.004000
Training Epoch: 25 [7936/9494]	Loss: 0.4127	LR: 0.004000
Training Epoch: 25 [8192/9494]	Loss: 0.3861	LR: 0.004000
Training Epoch: 25 [8448/9494]	Loss: 0.4419	LR: 0.004000
Training Epoch: 25 [8704/9494]	Loss: 0.3999	LR: 0.004000
Training Epoch: 25 [8960/9494]	Loss: 0.3944	LR: 0.004000
Training Epoch: 25 [9216/9494]	Loss: 0.4199	LR: 0.004000
Training Epoch: 25 [9472/9494]	Loss: 0.3912	LR: 0.004000
Training Epoch: 25 [9494/9494]	Loss: 0.4856	LR: 0.004000
Epoch 25 - Average Train Loss: 0.4245, Train Accuracy: 0.8056
Epoch 25 training time consumed: 137.55s
Evaluating Network.....
Test set: Epoch: 25, Average loss: 0.0018, Accuracy: 0.8291, Time consumed:8.20s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-25-best.pth
Training Epoch: 26 [256/9494]	Loss: 0.4578	LR: 0.004000
Training Epoch: 26 [512/9494]	Loss: 0.4564	LR: 0.004000
Training Epoch: 26 [768/9494]	Loss: 0.5034	LR: 0.004000
Training Epoch: 26 [1024/9494]	Loss: 0.3905	LR: 0.004000
Training Epoch: 26 [1280/9494]	Loss: 0.4544	LR: 0.004000
Training Epoch: 26 [1536/9494]	Loss: 0.4403	LR: 0.004000
Training Epoch: 26 [1792/9494]	Loss: 0.4127	LR: 0.004000
Training Epoch: 26 [2048/9494]	Loss: 0.4357	LR: 0.004000
Training Epoch: 26 [2304/9494]	Loss: 0.4210	LR: 0.004000
Training Epoch: 26 [2560/9494]	Loss: 0.3882	LR: 0.004000
Training Epoch: 26 [2816/9494]	Loss: 0.4107	LR: 0.004000
Training Epoch: 26 [3072/9494]	Loss: 0.3877	LR: 0.004000
Training Epoch: 26 [3328/9494]	Loss: 0.3343	LR: 0.004000
Training Epoch: 26 [3584/9494]	Loss: 0.3786	LR: 0.004000
Training Epoch: 26 [3840/9494]	Loss: 0.3709	LR: 0.004000
Training Epoch: 26 [4096/9494]	Loss: 0.4005	LR: 0.004000
Training Epoch: 26 [4352/9494]	Loss: 0.4146	LR: 0.004000
Training Epoch: 26 [4608/9494]	Loss: 0.3952	LR: 0.004000
Training Epoch: 26 [4864/9494]	Loss: 0.4137	LR: 0.004000
Training Epoch: 26 [5120/9494]	Loss: 0.4514	LR: 0.004000
Training Epoch: 26 [5376/9494]	Loss: 0.4123	LR: 0.004000
Training Epoch: 26 [5632/9494]	Loss: 0.4079	LR: 0.004000
Training Epoch: 26 [5888/9494]	Loss: 0.3838	LR: 0.004000
Training Epoch: 26 [6144/9494]	Loss: 0.4025	LR: 0.004000
Training Epoch: 26 [6400/9494]	Loss: 0.3573	LR: 0.004000
Training Epoch: 26 [6656/9494]	Loss: 0.3932	LR: 0.004000
Training Epoch: 26 [6912/9494]	Loss: 0.3670	LR: 0.004000
Training Epoch: 26 [7168/9494]	Loss: 0.4401	LR: 0.004000
Training Epoch: 26 [7424/9494]	Loss: 0.4377	LR: 0.004000
Training Epoch: 26 [7680/9494]	Loss: 0.4099	LR: 0.004000
Training Epoch: 26 [7936/9494]	Loss: 0.3929	LR: 0.004000
Training Epoch: 26 [8192/9494]	Loss: 0.3922	LR: 0.004000
Training Epoch: 26 [8448/9494]	Loss: 0.3678	LR: 0.004000
Training Epoch: 26 [8704/9494]	Loss: 0.4613	LR: 0.004000
Training Epoch: 26 [8960/9494]	Loss: 0.3824	LR: 0.004000
Training Epoch: 26 [9216/9494]	Loss: 0.4166	LR: 0.004000
Training Epoch: 26 [9472/9494]	Loss: 0.4403	LR: 0.004000
Training Epoch: 26 [9494/9494]	Loss: 0.4347	LR: 0.004000
Epoch 26 - Average Train Loss: 0.4104, Train Accuracy: 0.8147
Epoch 26 training time consumed: 137.57s
Evaluating Network.....
Test set: Epoch: 26, Average loss: 0.0020, Accuracy: 0.8039, Time consumed:8.22s
Training Epoch: 27 [256/9494]	Loss: 0.3859	LR: 0.004000
Training Epoch: 27 [512/9494]	Loss: 0.3979	LR: 0.004000
Training Epoch: 27 [768/9494]	Loss: 0.4302	LR: 0.004000
Training Epoch: 27 [1024/9494]	Loss: 0.3574	LR: 0.004000
Training Epoch: 27 [1280/9494]	Loss: 0.4204	LR: 0.004000
Training Epoch: 27 [1536/9494]	Loss: 0.4384	LR: 0.004000
Training Epoch: 27 [1792/9494]	Loss: 0.3852	LR: 0.004000
Training Epoch: 27 [2048/9494]	Loss: 0.4067	LR: 0.004000
Training Epoch: 27 [2304/9494]	Loss: 0.4282	LR: 0.004000
Training Epoch: 27 [2560/9494]	Loss: 0.3988	LR: 0.004000
Training Epoch: 27 [2816/9494]	Loss: 0.4449	LR: 0.004000
Training Epoch: 27 [3072/9494]	Loss: 0.3600	LR: 0.004000
Training Epoch: 27 [3328/9494]	Loss: 0.4234	LR: 0.004000
Training Epoch: 27 [3584/9494]	Loss: 0.4210	LR: 0.004000
Training Epoch: 27 [3840/9494]	Loss: 0.4023	LR: 0.004000
Training Epoch: 27 [4096/9494]	Loss: 0.4196	LR: 0.004000
Training Epoch: 27 [4352/9494]	Loss: 0.3760	LR: 0.004000
Training Epoch: 27 [4608/9494]	Loss: 0.3971	LR: 0.004000
Training Epoch: 27 [4864/9494]	Loss: 0.3790	LR: 0.004000
Training Epoch: 27 [5120/9494]	Loss: 0.3604	LR: 0.004000
Training Epoch: 27 [5376/9494]	Loss: 0.4291	LR: 0.004000
Training Epoch: 27 [5632/9494]	Loss: 0.3855	LR: 0.004000
Training Epoch: 27 [5888/9494]	Loss: 0.4016	LR: 0.004000
Training Epoch: 27 [6144/9494]	Loss: 0.3455	LR: 0.004000
Training Epoch: 27 [6400/9494]	Loss: 0.4076	LR: 0.004000
Training Epoch: 27 [6656/9494]	Loss: 0.4281	LR: 0.004000
Training Epoch: 27 [6912/9494]	Loss: 0.3593	LR: 0.004000
Training Epoch: 27 [7168/9494]	Loss: 0.3357	LR: 0.004000
Training Epoch: 27 [7424/9494]	Loss: 0.3958	LR: 0.004000
Training Epoch: 27 [7680/9494]	Loss: 0.3508	LR: 0.004000
Training Epoch: 27 [7936/9494]	Loss: 0.4004	LR: 0.004000
Training Epoch: 27 [8192/9494]	Loss: 0.4320	LR: 0.004000
Training Epoch: 27 [8448/9494]	Loss: 0.3824	LR: 0.004000
Training Epoch: 27 [8704/9494]	Loss: 0.3714	LR: 0.004000
Training Epoch: 27 [8960/9494]	Loss: 0.3727	LR: 0.004000
Training Epoch: 27 [9216/9494]	Loss: 0.3894	LR: 0.004000
Training Epoch: 27 [9472/9494]	Loss: 0.4252	LR: 0.004000
Training Epoch: 27 [9494/9494]	Loss: 0.2519	LR: 0.004000
Epoch 27 - Average Train Loss: 0.3955, Train Accuracy: 0.8240
Epoch 27 training time consumed: 137.89s
Evaluating Network.....
Test set: Epoch: 27, Average loss: 0.0020, Accuracy: 0.8068, Time consumed: 8.06s
Training Epoch: 28 [256/9494]	Loss: 0.3585	LR: 0.004000
Training Epoch: 28 [512/9494]	Loss: 0.3528	LR: 0.004000
Training Epoch: 28 [768/9494]	Loss: 0.4065	LR: 0.004000
Training Epoch: 28 [1024/9494]	Loss: 0.3734	LR: 0.004000
Training Epoch: 28 [1280/9494]	Loss: 0.3236	LR: 0.004000
Training Epoch: 28 [1536/9494]	Loss: 0.3840	LR: 0.004000
Training Epoch: 28 [1792/9494]	Loss: 0.3700	LR: 0.004000
Training Epoch: 28 [2048/9494]	Loss: 0.4039	LR: 0.004000
Training Epoch: 28 [2304/9494]	Loss: 0.3271	LR: 0.004000
Training Epoch: 28 [2560/9494]	Loss: 0.3811	LR: 0.004000
Training Epoch: 28 [2816/9494]	Loss: 0.4187	LR: 0.004000
Training Epoch: 28 [3072/9494]	Loss: 0.3540	LR: 0.004000
Training Epoch: 28 [3328/9494]	Loss: 0.3630	LR: 0.004000
Training Epoch: 28 [3584/9494]	Loss: 0.3685	LR: 0.004000
Training Epoch: 28 [3840/9494]	Loss: 0.4122	LR: 0.004000
Training Epoch: 28 [4096/9494]	Loss: 0.3756	LR: 0.004000
Training Epoch: 28 [4352/9494]	Loss: 0.3697	LR: 0.004000
Training Epoch: 28 [4608/9494]	Loss: 0.3512	LR: 0.004000
Training Epoch: 28 [4864/9494]	Loss: 0.3785	LR: 0.004000
Training Epoch: 28 [5120/9494]	Loss: 0.4121	LR: 0.004000
Training Epoch: 28 [5376/9494]	Loss: 0.3411	LR: 0.004000
Training Epoch: 28 [5632/9494]	Loss: 0.3252	LR: 0.004000
Training Epoch: 28 [5888/9494]	Loss: 0.3592	LR: 0.004000
Training Epoch: 28 [6144/9494]	Loss: 0.3144	LR: 0.004000
Training Epoch: 28 [6400/9494]	Loss: 0.3916	LR: 0.004000
Training Epoch: 28 [6656/9494]	Loss: 0.4282	LR: 0.004000
Training Epoch: 28 [6912/9494]	Loss: 0.3638	LR: 0.004000
Training Epoch: 28 [7168/9494]	Loss: 0.3633	LR: 0.004000
Training Epoch: 28 [7424/9494]	Loss: 0.3048	LR: 0.004000
Training Epoch: 28 [7680/9494]	Loss: 0.3559	LR: 0.004000
Training Epoch: 28 [7936/9494]	Loss: 0.3560	LR: 0.004000
Training Epoch: 28 [8192/9494]	Loss: 0.3372	LR: 0.004000
Training Epoch: 28 [8448/9494]	Loss: 0.3498	LR: 0.004000
Training Epoch: 28 [8704/9494]	Loss: 0.3711	LR: 0.004000
Training Epoch: 28 [8960/9494]	Loss: 0.3461	LR: 0.004000
Training Epoch: 28 [9216/9494]	Loss: 0.3482	LR: 0.004000
Training Epoch: 28 [9472/9494]	Loss: 0.3860	LR: 0.004000
Training Epoch: 28 [9494/9494]	Loss: 0.3279	LR: 0.004000
Epoch 28 - Average Train Loss: 0.3655, Train Accuracy: 0.8388
Epoch 28 training time consumed: 137.91s
Evaluating Network.....
Test set: Epoch: 28, Average loss: 0.0014, Accuracy: 0.8596, Time consumed: 8.15s
Saving weights file to checkpoint/retrain/ResNet18/Friday_25_July_2025_10h_03m_04s/ResNet18-MUCAC-seed9-ret100-28-best.pth
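The save above fires because epoch 28's test accuracy (0.8596) beats every earlier epoch. A minimal sketch of that "save only on new best" pattern (the helper name and history list are hypothetical, not from the actual training script):

```python
def should_save_best(acc_history, current_acc):
    """Return True when the current test accuracy exceeds every
    previously recorded epoch accuracy (illustrative helper only)."""
    return not acc_history or current_acc > max(acc_history)

# Epoch 28 beats the prior best, so a '-best.pth' checkpoint is written.
history = [0.8039, 0.8068]
print(should_save_best(history, 0.8596))
```

In the real run the checkpoint filename encodes the model, dataset, seed, retain ratio, and epoch, so each new best leaves an auditable trail.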
Training Epoch: 29 [256/9494]	Loss: 0.2983	LR: 0.004000
Training Epoch: 29 [512/9494]	Loss: 0.3764	LR: 0.004000
Training Epoch: 29 [768/9494]	Loss: 0.3516	LR: 0.004000
Training Epoch: 29 [1024/9494]	Loss: 0.3203	LR: 0.004000
Training Epoch: 29 [1280/9494]	Loss: 0.3921	LR: 0.004000
Training Epoch: 29 [1536/9494]	Loss: 0.3762	LR: 0.004000
Training Epoch: 29 [1792/9494]	Loss: 0.3599	LR: 0.004000
Training Epoch: 29 [2048/9494]	Loss: 0.3419	LR: 0.004000
Training Epoch: 29 [2304/9494]	Loss: 0.3197	LR: 0.004000
Training Epoch: 29 [2560/9494]	Loss: 0.4167	LR: 0.004000
Training Epoch: 29 [2816/9494]	Loss: 0.4082	LR: 0.004000
Training Epoch: 29 [3072/9494]	Loss: 0.3419	LR: 0.004000
Training Epoch: 29 [3328/9494]	Loss: 0.3483	LR: 0.004000
Training Epoch: 29 [3584/9494]	Loss: 0.3518	LR: 0.004000
Training Epoch: 29 [3840/9494]	Loss: 0.3307	LR: 0.004000
Training Epoch: 29 [4096/9494]	Loss: 0.3554	LR: 0.004000
Training Epoch: 29 [4352/9494]	Loss: 0.3706	LR: 0.004000
Training Epoch: 29 [4608/9494]	Loss: 0.3723	LR: 0.004000
Training Epoch: 29 [4864/9494]	Loss: 0.3405	LR: 0.004000
Training Epoch: 29 [5120/9494]	Loss: 0.3439	LR: 0.004000
Training Epoch: 29 [5376/9494]	Loss: 0.3387	LR: 0.004000
Training Epoch: 29 [5632/9494]	Loss: 0.3234	LR: 0.004000
Training Epoch: 29 [5888/9494]	Loss: 0.3675	LR: 0.004000
Training Epoch: 29 [6144/9494]	Loss: 0.3724	LR: 0.004000
Training Epoch: 29 [6400/9494]	Loss: 0.3464	LR: 0.004000
Training Epoch: 29 [6656/9494]	Loss: 0.3365	LR: 0.004000
Training Epoch: 29 [6912/9494]	Loss: 0.3897	LR: 0.004000
Training Epoch: 29 [7168/9494]	Loss: 0.3624	LR: 0.004000
Training Epoch: 29 [7424/9494]	Loss: 0.3326	LR: 0.004000
Training Epoch: 29 [7680/9494]	Loss: 0.3357	LR: 0.004000
Training Epoch: 29 [7936/9494]	Loss: 0.2880	LR: 0.004000
Training Epoch: 29 [8192/9494]	Loss: 0.3451	LR: 0.004000
Training Epoch: 29 [8448/9494]	Loss: 0.3497	LR: 0.004000
Training Epoch: 29 [8704/9494]	Loss: 0.3051	LR: 0.004000
Training Epoch: 29 [8960/9494]	Loss: 0.3898	LR: 0.004000
Training Epoch: 29 [9216/9494]	Loss: 0.3299	LR: 0.004000
Training Epoch: 29 [9472/9494]	Loss: 0.3695	LR: 0.004000
Training Epoch: 29 [9494/9494]	Loss: 0.6217	LR: 0.004000
Epoch 29 - Average Train Loss: 0.3519, Train Accuracy: 0.8474
Epoch 29 training time consumed: 137.75s
Evaluating Network.....
Test set: Epoch: 29, Average loss: 0.0036, Accuracy: 0.8450, Time consumed: 8.19s
Training Epoch: 30 [256/9494]	Loss: 0.3570	LR: 0.004000
Training Epoch: 30 [512/9494]	Loss: 0.4465	LR: 0.004000
Training Epoch: 30 [768/9494]	Loss: 0.4009	LR: 0.004000
Training Epoch: 30 [1024/9494]	Loss: 0.4695	LR: 0.004000
Training Epoch: 30 [1280/9494]	Loss: 0.3914	LR: 0.004000
Training Epoch: 30 [1536/9494]	Loss: 0.4174	LR: 0.004000
Training Epoch: 30 [1792/9494]	Loss: 0.3916	LR: 0.004000
Training Epoch: 30 [2048/9494]	Loss: 0.4475	LR: 0.004000
Training Epoch: 30 [2304/9494]	Loss: 0.4458	LR: 0.004000
Training Epoch: 30 [2560/9494]	Loss: 0.4230	LR: 0.004000
Training Epoch: 30 [2816/9494]	Loss: 0.3784	LR: 0.004000
Training Epoch: 30 [3072/9494]	Loss: 0.3797	LR: 0.004000
Training Epoch: 30 [3328/9494]	Loss: 0.3596	LR: 0.004000
Training Epoch: 30 [3584/9494]	Loss: 0.3964	LR: 0.004000
Training Epoch: 30 [3840/9494]	Loss: 0.3864	LR: 0.004000
Training Epoch: 30 [4096/9494]	Loss: 0.3693	LR: 0.004000
Training Epoch: 30 [4352/9494]	Loss: 0.3995	LR: 0.004000
Training Epoch: 30 [4608/9494]	Loss: 0.4117	LR: 0.004000
Training Epoch: 30 [4864/9494]	Loss: 0.3341	LR: 0.004000
Training Epoch: 30 [5120/9494]	Loss: 0.3119	LR: 0.004000
Training Epoch: 30 [5376/9494]	Loss: 0.3380	LR: 0.004000
Training Epoch: 30 [5632/9494]	Loss: 0.3749	LR: 0.004000
Training Epoch: 30 [5888/9494]	Loss: 0.4216	LR: 0.004000
Training Epoch: 30 [6144/9494]	Loss: 0.3354	LR: 0.004000
Training Epoch: 30 [6400/9494]	Loss: 0.3249	LR: 0.004000
Training Epoch: 30 [6656/9494]	Loss: 0.3089	LR: 0.004000
Training Epoch: 30 [6912/9494]	Loss: 0.3667	LR: 0.004000
Training Epoch: 30 [7168/9494]	Loss: 0.3290	LR: 0.004000
Training Epoch: 30 [7424/9494]	Loss: 0.3278	LR: 0.004000
Training Epoch: 30 [7680/9494]	Loss: 0.3101	LR: 0.004000
Training Epoch: 30 [7936/9494]	Loss: 0.3471	LR: 0.004000
Training Epoch: 30 [8192/9494]	Loss: 0.3446	LR: 0.004000
Training Epoch: 30 [8448/9494]	Loss: 0.3508	LR: 0.004000
Training Epoch: 30 [8704/9494]	Loss: 0.4117	LR: 0.004000
Training Epoch: 30 [8960/9494]	Loss: 0.3925	LR: 0.004000
Training Epoch: 30 [9216/9494]	Loss: 0.2844	LR: 0.004000
Training Epoch: 30 [9472/9494]	Loss: 0.3617	LR: 0.004000
Training Epoch: 30 [9494/9494]	Loss: 0.5197	LR: 0.004000
Epoch 30 - Average Train Loss: 0.3746, Train Accuracy: 0.8352
Epoch 30 training time consumed: 137.55s
Evaluating Network.....
Test set: Epoch: 30, Average loss: 0.0023, Accuracy: 0.7874, Time consumed: 8.24s
Training Epoch: 31 [256/9494]	Loss: 0.3160	LR: 0.004000
Training Epoch: 31 [512/9494]	Loss: 0.3748	LR: 0.004000
Training Epoch: 31 [768/9494]	Loss: 0.3889	LR: 0.004000
Training Epoch: 31 [1024/9494]	Loss: 0.4685	LR: 0.004000
Training Epoch: 31 [1280/9494]	Loss: 0.3658	LR: 0.004000
Training Epoch: 31 [1536/9494]	Loss: 0.3719	LR: 0.004000
Training Epoch: 31 [1792/9494]	Loss: 0.3343	LR: 0.004000
Training Epoch: 31 [2048/9494]	Loss: 0.4006	LR: 0.004000
Training Epoch: 31 [2304/9494]	Loss: 0.3778	LR: 0.004000
Training Epoch: 31 [2560/9494]	Loss: 0.4018	LR: 0.004000
Training Epoch: 31 [2816/9494]	Loss: 0.3484	LR: 0.004000
Training Epoch: 31 [3072/9494]	Loss: 0.3344	LR: 0.004000
Training Epoch: 31 [3328/9494]	Loss: 0.3170	LR: 0.004000
Training Epoch: 31 [3584/9494]	Loss: 0.3710	LR: 0.004000
Training Epoch: 31 [3840/9494]	Loss: 0.2927	LR: 0.004000
Training Epoch: 31 [4096/9494]	Loss: 0.3554	LR: 0.004000
Training Epoch: 31 [4352/9494]	Loss: 0.3611	LR: 0.004000
Training Epoch: 31 [4608/9494]	Loss: 0.3815	LR: 0.004000
Training Epoch: 31 [4864/9494]	Loss: 0.3286	LR: 0.004000
Training Epoch: 31 [5120/9494]	Loss: 0.3352	LR: 0.004000
Training Epoch: 31 [5376/9494]	Loss: 0.3609	LR: 0.004000
Training Epoch: 31 [5632/9494]	Loss: 0.3003	LR: 0.004000
Training Epoch: 31 [5888/9494]	Loss: 0.3096	LR: 0.004000
Training Epoch: 31 [6144/9494]	Loss: 0.3724	LR: 0.004000
Training Epoch: 31 [6400/9494]	Loss: 0.3224	LR: 0.004000
Training Epoch: 31 [6656/9494]	Loss: 0.3252	LR: 0.004000
Training Epoch: 31 [6912/9494]	Loss: 0.3204	LR: 0.004000
Training Epoch: 31 [7168/9494]	Loss: 0.3044	LR: 0.004000
Training Epoch: 31 [7424/9494]	Loss: 0.3236	LR: 0.004000
Training Epoch: 31 [7680/9494]	Loss: 0.3126	LR: 0.004000
Training Epoch: 31 [7936/9494]	Loss: 0.3213	LR: 0.004000
Training Epoch: 31 [8192/9494]	Loss: 0.2787	LR: 0.004000
Training Epoch: 31 [8448/9494]	Loss: 0.3897	LR: 0.004000
Training Epoch: 31 [8704/9494]	Loss: 0.3602	LR: 0.004000
Training Epoch: 31 [8960/9494]	Loss: 0.2898	LR: 0.004000
Training Epoch: 31 [9216/9494]	Loss: 0.3290	LR: 0.004000
Training Epoch: 31 [9472/9494]	Loss: 0.3458	LR: 0.004000
Training Epoch: 31 [9494/9494]	Loss: 0.4171	LR: 0.004000
Epoch 31 - Average Train Loss: 0.3459, Train Accuracy: 0.8509
Epoch 31 training time consumed: 137.06s
Evaluating Network.....
Test set: Epoch: 31, Average loss: 0.0018, Accuracy: 0.8203, Time consumed: 8.10s
Valid (Test) DL: 2065
Train DL: 10548
Retain Train DL: 9494
Forget Train DL: 1054
Retain Valid DL: 9494
Forget Valid DL: 1054
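The dataloader sizes are consistent: the 10548-sample train set splits into 9494 retain and 1054 forget samples, i.e. a 10% forget fraction. A sketch of such a seeded split (function name and exact strategy are assumptions; the head of the log suggests the real forget set is additionally class-balanced, 527 per class):

```python
import random

def split_retain_forget(indices, forget_frac=0.1, seed=9):
    """Partition training indices into (retain, forget) sets.
    Illustrative only: a plain random split, not the script's
    class-balanced sampling."""
    rng = random.Random(seed)
    idx = list(indices)
    rng.shuffle(idx)
    n_forget = int(len(idx) * forget_frac)  # 10548 * 0.1 -> 1054
    return idx[n_forget:], idx[:n_forget]

retain, forget = split_retain_forget(range(10548))
print(len(retain), len(forget))  # 9494 and 1054, matching the log
```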
retain_prob Distribution: 2065 samples
test_prob Distribution: 2065 samples
forget_prob Distribution: 1054 samples
Set1 Distribution: 1054 samples
Set2 Distribution: 1054 samples
Set1 Distribution: 1054 samples
Set2 Distribution: 1054 samples
Set1 Distribution: 2065 samples
Set2 Distribution: 2065 samples
Set1 Distribution: 2065 samples
Set2 Distribution: 2065 samples
Test Accuracy: 82.0670
Retain Accuracy: 79.2268
Zero-Retain Forget (ZRF): 0.9375
Membership Inference Attack (MIA): 0.4715
Forget vs Retain Membership Inference Attack (MIA): 0.4976
Forget vs Test Membership Inference Attack (MIA): 0.5379
Test vs Retain Membership Inference Attack (MIA): 0.5351
Train vs Test Membership Inference Attack (MIA): 0.5000
Forget Set Accuracy (Df): 78.3802
Method Execution Time: 5690.03 seconds
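The pairwise MIA scores above hover near 0.5, meaning the attacker can barely tell the two sets apart, which is the desired outcome after unlearning. A toy threshold-based attack illustrates what such a score measures (this is a generic sketch, not the evaluation code used for this log, which likely trains a classifier on the Set1/Set2 probability distributions):

```python
import statistics

def mia_accuracy(member_scores, nonmember_scores):
    """Threshold membership-inference attack (illustrative only):
    predict 'member' when a sample's confidence exceeds the midpoint
    of the two distributions' means. An accuracy near 0.5 means the
    attacker cannot distinguish the sets."""
    thr = (statistics.mean(member_scores)
           + statistics.mean(nonmember_scores)) / 2
    correct = sum(s > thr for s in member_scores)
    correct += sum(s <= thr for s in nonmember_scores)
    return correct / (len(member_scores) + len(nonmember_scores))

# Well-separated distributions: attack succeeds (accuracy 1.0).
print(mia_accuracy([0.9, 0.8, 0.85], [0.6, 0.55, 0.7]))
# Overlapping distributions: attack is at chance (accuracy 0.5).
print(mia_accuracy([0.5, 0.6], [0.6, 0.5]))
```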
